Tuesday, February 28, 2023

China and AI

Power in the Age of AI | Robert Wright & Paul Scharre

00:37 Paul’s new book, Four Battlegrounds: Power in the Age of Artificial Intelligence
11:42 Will AIs replace humans in future wars?
29:24 China as an AI superpower
39:29 Will Beijing spread high-tech authoritarianism globally?
48:51 How useful is the “democracy versus autocracy” framework?
54:01 Why Paul’s worried about a race to the bottom in AI safety
1:00:00 Tom Cruise can’t hold a candle to AI pilots


My current thinking about ChatGPT @3QD [Gärdenfors, Wolfram, and the value of speculation]

Some of my thinking anyhow, but by no means all of it. My thinking about the Chatster goes off in many directions, too many to chase down and corral for a single article. Here it is:

ChatGPT is a miracle of rare device. Here’s how I’m thinking about it.

Levels of description and analysis

Much of the article revolves around the question: What’s ChatGPT doing? I presented an idea that David Marr advanced back in the late 1970s and early 1980s: We must describe and analyze the behavior of complex information systems – he was a neuroscientist interested in vision – on several levels. I think we need to do the same with large language models, of which ChatGPT is now the most widely known example.

The company line on LLMs is that they work by statistically guided next-token prediction. I don’t doubt that, but I don’t find it very helpful either. It’s like saying a laptop computer works by executing a fetch-execute cycle. Well, yes it does, and so does every other digital computer. More to the point, that’s how every program is run, whether it’s the operating system, a word processor, a browser, a printer driver, etc. That’s what’s going on at the bottom level.
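For concreteness, here is a toy sketch of what “statistically guided next-token prediction” amounts to at the bottom level. The vocabulary and probability table are invented for illustration; a real LLM computes such distributions from billions of learned parameters, not a lookup table.

```python
import random

# A toy "language model": for each context word, a distribution over
# possible next words. (Entirely made up for illustration.)
MODEL = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.3, "ran": 0.7},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(seed=0):
    """Emit word after word, each sampled from the model's
    distribution conditioned on the previous word."""
    rng = random.Random(seed)
    word, out = "<start>", []
    while word != "<end>":
        dist = MODEL[word]
        words, probs = zip(*dist.items())
        word = rng.choices(words, weights=probs)[0]
        if word != "<end>":
            out.append(word)
    return " ".join(out)
```

That loop is the whole “bottom level”: sample, append, repeat. Everything interesting about an LLM lies in where the distributions come from.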

In the case of a word processor, the top-level processes include such things as: create a new document, save a document, cut text, paste text, check the spelling, apply a style to a block of text, and so forth. Those are actions taken by the user. What happens between those actions and the bottom-level fetch-execute is defined by processes implemented in low-level and high-level languages. Each of those processes was programmed by a human programmer. So, in theory, we know everything about what’s going on in a word processor, or, for that matter, any other kind of program.

Things are quite different with LLMs. At the top-level users are issuing prompts and the LLM is responding to them. How does it do that? By issuing word after word after word based on the statistical model it built up during training. What happens in between the bottom level and the top level?

We don’t know. And, all too often, we don’t care. As long as it runs and does impressive things, we don’t care how it works.  

That’s no way to create the future. 

[Check out David Chapman, How to understand AI systems.]

Peter Gärdenfors’ geometry of meaning

Perhaps the fascinating work Peter Gärdenfors has been doing in semantics can help. He has been developing a geometric concept of meaning. His two books:

Conceptual Spaces: The Geometry of Thought, MIT 2000.
The Geometry of Meaning: Semantics Based on Conceptual Spaces, MIT 2014.

I’m not going to attempt even a quick sketch of his ideas – you can find a bit in this post, The brain, the mind, and GPT-3: Dimensions and conceptual spaces – but I’ll offer a brief passage from Conceptual Spaces, p. 253:

On the symbolic level, searching, matching of symbol strings, and rule following are central. On the subconceptual level, pattern recognition, pattern transformation, and dynamic adaptation of values are some examples of typical computational processes. And on the intermediate conceptual level, vector calculations, coordinate transformations, as well as other geometrical operations are in focus. Of course, one type of calculation can be simulated by one of the others (for example, by symbolic methods on a Turing machine). A point that is often forgotten, however, is that the simulations will, in general, be computationally more complex than the process that is simulated.
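To make the “intermediate conceptual level” a bit more concrete, here is a minimal sketch of a conceptual space in Gärdenfors’ sense: concepts as regions around prototype points in a space of quality dimensions, with classification as a purely geometric operation. The dimensions and prototype coordinates are invented for illustration.

```python
import math

# A toy conceptual space for color, with two quality dimensions
# (say, hue and brightness, scaled to [0, 1]). Prototype points are
# invented; in Gardenfors's account a concept is a convex region,
# here induced by nearest-prototype (Voronoi) classification.
PROTOTYPES = {
    "red":    (0.0, 0.5),
    "yellow": (0.17, 0.8),
    "blue":   (0.6, 0.4),
}

def classify(point):
    """Assign a point to the concept whose prototype is nearest --
    a geometric operation, neither symbolic nor subconceptual."""
    return min(PROTOTYPES,
               key=lambda c: math.dist(point, PROTOTYPES[c]))
```

Note that the level mismatch Gärdenfors flags shows up here: simulating this one-line geometric test with symbol-matching rules would take far more machinery.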

The top-level processes of LLMs, such as ChatGPT, are operating at the symbolic level. Those processes are to be described by grammars at the sentence level and by various kinds of discourse models above the sentence level. My 3QD article presents some evidence about how ChatGPT structures stories. That behavior is symbolic and so has to be understood in terms of actions on and with symbols. See, e.g.:

Christopher D. Manning, Kevin Clark, John Hewitt, Urvashi Khandelwal, and Omer Levy, Emergent linguistic structure in artificial neural networks trained by self-supervision, PNAS, Vol. 117, No. 48, June 3, 2020, pp. 30046-30054, https://doi.org/10.1073/pnas.1907367117.

Andrew M. Saxe, James L. McClelland, and Surya Ganguli, A mathematical theory of semantic development in deep neural networks. PNAS, June 4, 2019, Vol. 116, No. 23, pp. 11537-11546, https://www.pnas.org/content/116/23/11537.

What’s going on at the subconceptual level, that is, the bottom level, and the intermediate level? For that I want to turn to Stephen Wolfram.

Wolfram on ChatGPT

Wolfram has written a long, quasi-technical, and quite useful article, What Is ChatGPT Doing … and Why Does It Work? He makes extensive use of concepts from complex dynamics in his account. For the sake of argument let’s say that’s what ChatGPT is doing at the bottom level. Perhaps between these two we have Gärdenfors’ “intermediate conceptual level” with its “vector calculations” and “other geometrical operations.”

Let’s scroll down through Wolfram’s article to the section, “Meaning Space and Semantic Laws of Motion.” Wolfram observes:

We discussed above that inside ChatGPT any piece of text is effectively represented by an array of numbers that we can think of as coordinates of a point in some kind of “linguistic feature space”. So when ChatGPT continues a piece of text this corresponds to tracing out a trajectory in linguistic feature space.

Given that the idea of a linguistic feature space is very general, Gärdenfors’ geometric semantics is certainly an account of something that can be called a “linguistic feature space.”

Wolfram has been working on an example where he follows GPT-2 from the prompt: “The best thing about AI...” After having shown illustrations of a feature space, he asks: So what about trajectories? We can look at the trajectory that a prompt for ChatGPT follows in feature space—and then we can see how ChatGPT continues that (click on the diagrams to embiggen):

There’s certainly no “geometrically obvious” law of motion here. And that’s not at all surprising; we fully expect this to be a considerably more complicated story. And, for example, it’s far from obvious that even if there is a “semantic law of motion” to be found, what kind of embedding (or, in effect, what “variables”) it’ll most naturally be stated in.

In the picture above, we’re showing several steps in the “trajectory”—where at each step we’re picking the word that ChatGPT considers the most probable (the “zero temperature” case). But we can also ask what words can “come next” with what probabilities at a given point:

And what we see in this case is that there’s a “fan” of high-probability words that seems to go in a more or less definite direction in feature space. What happens if we go further? Here are the successive “fans” that appear as we “move along” the trajectory:

Keeping in mind that this is a space of very high dimensionality, are those "more or less definite directions in feature space" the sort of thing we'd find in Gärdenfors' conceptual spaces? Here’s what he says in The Geometry of Meaning (p. 21):

A central idea is that the meanings we use in communication can be described as organized in abstract spatial structures that are expressed in terms of dimensions, distances, regions, and other geometric notions. In addition, I also use some notions from vector algebra.

That surely sounds like it’s in the right ballpark. That does not mean, of course, that it is. But surely it is worth investigating.
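As a toy illustration of Wolfram’s trajectories and “fans”: given any model that assigns probabilities to next words, the zero-temperature trajectory follows the single most probable word at each step, while the fan at a point is the set of top-k continuations. The probability table below is invented; GPT-2’s comes from its learned weights.

```python
# A toy next-word distribution table (invented for illustration; a
# real model computes these from billions of learned parameters).
DIST = {
    "AI": {"is": 0.5, "can": 0.3, "will": 0.2},
    "is": {"that": 0.4, "its": 0.35, "the": 0.25},
    "can": {"learn": 0.6, "write": 0.4},
}

def fan(word, k=2):
    """The 'fan' at a point: the k highest-probability continuations."""
    dist = DIST.get(word, {})
    return sorted(dist, key=dist.get, reverse=True)[:k]

def zero_temperature_path(word, steps=2):
    """Follow the single most probable word at each step --
    Wolfram's 'zero temperature' case."""
    path = [word]
    while len(path) <= steps and path[-1] in DIST:
        path.append(fan(path[-1], k=1)[0])
    return path
```

Whether the directions those fans trace out line up with the dimensions of a Gärdenfors-style conceptual space is exactly the open question.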

The role of speculation in scientific investigation

The deep learning community puts on a great show of intellectual rigor. And in some ways, it is not merely a show. The rigor is there. The technology wouldn’t work as well as it does if it weren’t rigorous in some substantial way.

But there is little rigor that I can see in the way they think about language and texts. I see relatively little knowledge about linguistics, psycholinguistics, and related areas of cognitive science.

Nor is there much interest in figuring out what happens with those 175 billion parameters as they generate text. There is some work being done on reverse engineering (aka mechanistic interpretability) the operations of these engines. There needs to be more, much more – see this article by David Chapman for suggestions, Do AI as science and engineering instead.

Speculation is a necessary part of this process. In order to go boldly where none have gone before, you are going to have to speculate. It can’t be helped. Sooner or later some speculation will turn out to be correct, that is, it will be supported by evidence. There is no way to determine that ahead of time. But make your speculations as rigorous and detailed as you can. Speculation must be clear and crisp, otherwise it is not a reliable guide for thought.

More later, much more.

A Small Surprise on my Academia.edu page

I’d expected that the increased activity that started on the 20th would have subsided by now. Things were certainly headed in that direction, but then yesterday, the 27th, they started back up. The cause of that upswing is not obvious. My article at 3 Quarks Daily had some effect. So did my piece above, Faculty psychology and the will. And LessWrong is still sending folks over.

Where will the numbers go today?

Feb 28 – 5:44 AM – Looking back thirty days

Feb 28 – 5:44 AM – Looking back sixty days

Where things stood on the 24th.

Where things stood on the 22nd.

Monday, February 27, 2023

ChatGPT for robots

Here's to you

Faculty psychology and the will [cognitive science meets history of ideas]

This is a section from my article, Cognitive Networks and Literary Semantics. It's a bit crippled without the context provided by that article. It refers to "nodes" and "on-blocks" and "sensorimotor schema" and other things, all of which are explained in previous sections of the article. But still, you should be able to get the general idea.

The article refers here and there to the following figure: Commonsense Think. Click on the image to embiggen.

* * * * *

The notion that man consists of three souls (or a single soul which is tripartite) and a body is deeply embedded in our own intellectual tradition. In Primitive Man the Philosopher (New York: Dover, 1957), Paul Radin has shown that the Oglala Sioux, the Masai, and the Batak of Sumatra also believe that man consists of three souls and a body (pp. 257-74). He goes on to suggest that such a belief may well be a cultural universal. It may or it may not be, but the fact that similar theories appear on four continents (North America, Europe, Africa, Asia) suggests that the task those theories perform, an account of human nature, is highly constrained.

I am presently entertaining the hypothesis that such a theory is an attempt by the cognitive network to explain the relationship between the SELF node and the rest of the nervous system. If one believes (and that is all it is, a matter of what one believes) the soul to be tripartite, then the SELF has two components, a body and a soul which is tripartite, with rational, sensitive, and vegetative subdivisions. If one prefers to believe that one has three souls, then the SELF has three soul components and one body component, four components in sum.

The peripheral nervous system has two divisions, the somatic and the autonomic. The somatic system mediates voluntary control of the skeletal muscles and the activities of vision, hearing, touch, etc. The autonomic system regulates breathing, heart beat, digestion, the control of temperature, etc. Any episode which represents a transaction between the cognitive network and a sensorimotor schema whose intensities (first order input functions) are of somatic (perhaps just somatic motor) origin is cognized as being done by the body. The vegetative soul is cognized as the agent responsible for transactions between the network and sensorimotor schemas whose intensities are of autonomic origin. Episodes (which, you will recall, are conscious) in which one runs, jumps, spears hunks of meat, etc. are executed by the body. Episodes in which one feels hunger, lust, cold, thirst, etc. are cognized as being felt by the vegetative soul.

Responsibility for episodes of on-blocks is assigned to the sensitive soul. An on-block in which the condition is autonomic and the act to be executed is somatic (such as Figure 4) is a motivational on-block. Lust is the motivation behind seeking out another person, hunger is the motivation behind seeking out food. If the condition is somatic and the act is autonomic and perhaps somatic as well, then the on-block is emotive. One sees a bear (somatic) and adrenaline is pumped into the bloodstream (autonomic) and one runs away (somatic).
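The channel-based distinction the passage draws can be stated as a small rule. This sketch is mine, not the article’s; the function and channel names are hypothetical labels for the theory’s categories.

```python
def classify_onblock(condition_channel, act_channels):
    """Classify an on-block by the channels of its condition and its
    act, per the rule above: an autonomic condition with a somatic act
    is motivational; a somatic condition with an autonomic act
    (perhaps somatic as well) is emotive."""
    if condition_channel == "autonomic" and act_channels == {"somatic"}:
        return "motivational"
    if condition_channel == "somatic" and "autonomic" in act_channels:
        return "emotive"
    return "unclassified"

# Hunger (autonomic) motivating a search for food (somatic):
motivation = classify_onblock("autonomic", {"somatic"})
# Seeing a bear (somatic) triggering adrenaline and flight:
emotion = classify_onblock("somatic", {"autonomic", "somatic"})
```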

Episodes of (commonsense) thinking are attributed to the rational soul. Since the actual process of thinking (the commonsense notion) is carried out by the abstraction system (Figure 5), the rational soul is simply a first order copy of the second order network's regulation of the interaction between the semantic and the syntactic networks.

The entire activity of the network is thinking, where "thought" is a technical term in a theory about the functioning of the brain. Since the souls are defined over the channel structure of cognitive episodes (autonomic, somatic, semantic, and syntactic are the channels), it follows that any cognitive use of these concepts is thinking about thought (technical sense). In The Growth of Logical Thinking (Basic Books, 1958) Bärbel Inhelder and Jean Piaget assert that the child isn't able to think about thinking until adolescence. Consequently the child can't really acquire the structures of those souls until adolescence. In fact, since all abstract concepts involve thinking about thought (pattern matching over episodes), it follows that abstractions aren't really learned until adolescence.

It turns out that the system is capable of constructing accounts of abstract concepts which can be represented entirely within the first order network. Courage, an abstract concept, might be handled like this: Courage is when someone does something which is dangerous to himself and which is important when he could easily avoid doing it at all. The child could learn such a rationalization without the aid of a fully developed abstraction system, but his understanding of the concept would be rather shallow and inflexible. Since the nodes in the abstraction system cannot be provided with names, it follows that we can only communicate directly about rationalizations. The processing of a rationalization into a real abstract definition, involving the operation of the abstraction system on episodes, is internalization, just as the processing of episodes into systemic concepts is (see above). The intuitive part of any intellectual enterprise is likely to reflect the operation of the abstraction system in using its abstract schemas. The difficult work of giving verbal form to those intuitions by constructing first order accounts of the abstract concepts is rationalization. Faculty psychology is a rationalization of the activity of the nervous system.

In the Elizabethan rationalization of the nervous system spirit was introduced as a tertium quid between the material body and the immaterial soul(s). The rational soul exerted control over the body through the intellectual spirit or spirits; the sensitive soul worked through the animal spirit; and the vegetative soul worked through the vital spirit. Madness could be rationalized as the loss of intellectual spirit which causes a situation in which the rational soul can no longer control the body. The body is consequently under control by man's lower nature, the sensitive and vegetative souls. Such a rationalization is perfectly respectable and, in its own limited way, not inaccurate. But it hardly constitutes a scientific theory of madness. A scientific theory of madness would have to be constructed in terms of conflict and contradiction within the control system; it might well be that, within such a theory, the SELF system (three souls, body and SELF node) is responsible for much of the conflict.

Getting back to the Elizabethan rationalization, the loss of control by the intellectual soul cripples the Will, one of the faculties lodged in the rational soul. A complex concept node can be aroused when a lexeme which names it has been detected in a speech signal or when the abstract schema which defines it has been aroused by some episode which is an instance of the concept. When excitation travels from the SELF node, along a component (CMP) arc to one of the souls or to the body and hence to an episode which the system proceeds to execute, that episode has been willed by the SELF. When excitation travels in the opposite direction, from episode to the body node or to one of the soul nodes and then up a CMP arc to the SELF node, the SELF is being done unto; it is not willing the episode currently in consciousness. An episode of hunger appears, the abstraction system arouses the vegetative soul node, and excitation travels from there to the SELF node: SELF is hungry, SELF is subject to hunger, for the SELF has not willed the excitation of the hunger episode. Once hungry, one must find food. Excitation travels from the SELF node to the sensitive soul node and the body node and hence to an episode structure which contains a plan in which the sensitive soul controls the body in a search for food. That search is willed, for the excitation which resulted in the execution of a search episode started with the SELF node.

Notice that think, the process carried out by the rational soul, is defined in such a way that any thought (commonsense) about hunger or searching for food is registered as an episode in the rational soul. That is as it should be; for we can see, with the aid of cognitive network theory, that the souls of faculty psychology are all abstract concepts constructed in the cognitive network by the abstraction system. The concepts of the body, the vegetative soul, the sensitive soul, and the rational soul are all constructed of the same stuff: nodes and arcs realized in neural tissue. The system which operates in terms of those concepts sees them as very different things, for they have different definitions. But that system of thought was not sophisticated enough to construct rationalizations asserting that all the faculties are abstract concepts in a cognitive network. Cognitive network theory makes it possible to construct accounts of the souls in such a way that we can begin to examine the way in which the system's rationalized account of the interaction between the SELF node and the rest of the nervous system affects and effects the integrated activity of that entire nervous system.


Rick Beato interviews the great Keith Jarrett

Beato's notes:

Keith Jarrett is an American jazz pianist and composer born in Allentown, Pennsylvania in 1945. He is considered one of the most influential and accomplished jazz musicians of his generation, having recorded dozens of albums as a solo artist and as a leader of various ensembles. Known for his virtuosic piano playing and improvisational skills, Jarrett has been recognized for his unique ability to blend elements of jazz, classical music, and world music into his compositions. He has collaborated with many of the world's top musicians and has received numerous awards throughout his career, cementing his place in the pantheon of jazz greats.

This is my interview with Keith Jarrett.

Special thanks to: Pat Ryan, Keith Williams, David Bendeth, Steve Cloud, Caroline Fontanieu, Martin Geyer, ECM Records and very special thanks to Akiko Jarrett.

It's all good, but pay particular attention to the section where Beato plays Jarrett a solo improvisation on Miles Davis's "Solar" that Jarrett had done in the 1980s. From that point to the end of the interview is superb. Excellent music – there's another piece of music after that one. And an important document in music history.

Beato introduces that segment at about 31:01. Beato juxtaposes video of Jarrett listening NOW with video of Jarrett's playing THEN. The performance is masterful and entrancing. Notice how Jarrett played the piano with his whole body, often rising up from the piano bench and dancing at the keyboard – something he can no longer do as a consequence of strokes he suffered in 2018. Watch Jarrett's face as he listens.

Beato: That's riding the wave.

Jarrett: Wow, that's great.

Beato: That is really just amazing. It just all flows. Nothing is... is anything even going through your mind when you're playing? You're just channeling, right?

Jarrett: Yeah.

From the Wikipedia:

Jarrett's race has been a source of commentary by media and activists throughout his career, as he has reported being recurrently mistaken as a Black person. In a 2000 interview with Terry Gross, Jarrett relates an incident at the Heidelberg Jazz Festival in the Rhine-Neckar region of Germany when he was protested by Black musicians for something akin to cultural appropriation. He also tells of a separate moment in his career when black jazz musician Ornette Coleman approached him backstage, and "said something like 'Man, you've got to be black. You just have to be black.'" Jarrett replied "I know. I know. I'm working on it."

Me too, me too.

Check out my collection of anecdotes, Emotion and Magic in Musical Performance, Version 11.

Sunday, February 26, 2023

Evolutionary innovations that gave rise to the mammalian six-layered neocortex

From the linked article:

Trade-offs in brain development

Salamander brains share some, but not all, structures with the mammalian brain. They also have greater capacity to regenerate in response to damage. Three groups now come together with single-cell transcriptomics analyses that set the salamander brain in evolutionary context (see the Perspective by Faltine-Gonzalez and Kebschull). By comparing salamander brains with those of lizard, turtle, and mouse, Woych et al. track the evolutionary innovations that gave rise to the mammalian six-layered neocortex, which salamanders do not have. Lust et al. take a close look at why the axolotl brain is so much more capable of regeneration than is the mammalian brain. Finally, Wei et al. compare the developmental and regenerative processes in the axolotl brain. —PJH


The evolution of advanced cognition in vertebrates is associated with two independent innovations in the forebrain: the six-layered neocortex in mammals and the dorsal ventricular ridge (DVR) in sauropsids (reptiles and birds). How these innovations arose in vertebrate ancestors remains unclear. To reconstruct forebrain evolution in tetrapods, we built a cell-type atlas of the telencephalon of the salamander Pleurodeles waltl. Our molecular, developmental, and connectivity data indicate that parts of the sauropsid DVR trace back to tetrapod ancestors. By contrast, the salamander dorsal pallium is devoid of cellular and molecular characteristics of the mammalian neocortex yet shares similarities with the entorhinal cortex and subiculum. Our findings chart the series of innovations that resulted in the emergence of the mammalian six-layered neocortex and the sauropsid DVR.


Things you might dream about [All at once? – Why not?] [hallucinated city]

The benefits of dreaming

Carolyn Todd. REM Sleep Is Magical. Here’s What the Experts Know. NYTimes. Feb 25, 2023.

When we sleep we cycle through five stages of sleep. We dream during one of those stages. When we dream we move our eyes rapidly. Thus dream sleep is called REM sleep – Rapid Eye Movement.


If you’ve ever gone to bed upset about something and woken up noticeably less bothered, it’s likely a result of the emotional processing and memory reconsolidation that happen during REM. There’s evidence that your brain divorces memories from their emotional charge — removing the “sharp, painful edges” from life’s difficulties, said Matthew Walker, a professor of neuroscience and psychology and the founder and director of the Center for Human Sleep Science at the University of California, Berkeley. REM is “like a form of overnight therapy,” he said.

REM also makes us better learners. During this sleep stage, your brain strengthens neural connections formed by the previous day’s experiences and integrates them into existing networks, Dr. Robbins said.

Dr. Walker added: “We take those new pieces of information and start colliding them with our back catalog of stored information. It’s almost a form of informational alchemy.”

These novel connections also make us more creative, he said. “We wake up with a revised mind-wide web of associations” that helps us solve problems. Researchers in Dr. Walker’s lab conducted a small study where people were roused from different stages of sleep and asked to solve anagram puzzles. They found that subjects awakened from REM sleep solved 32 percent more anagrams than subjects who were interrupted during non-REM sleep.

Then, of course, there’s dreaming: The majority of our vivid dreaming takes place during REM. Some experts suspect that dreams are a mere byproduct of REM sleep — the mental manifestation of neurological work. But others think they might help people process painful experiences, Dr. Walker said.

And although most physical processes, like repairing bone and muscle tissues, happen during the non-REM sleep stages, some hormonal changes occur while someone is in REM, Dr. Walker said, like the release of testosterone (which peaks at the onset of the first REM cycle).

If we don't get enough:

But large deficits of REM sleep, no matter your age, can deprive you of its psychological benefits, Dr. Dasgupta said. You may have more trouble learning, processing emotional experiences or solving problems.

Dysregulated REM sleep is also linked with cognitive and mental health issues, like slower thinking and depression, said Dr. Ana Krieger, medical director of the Center for Sleep Medicine at Weill Cornell Medicine. Too little REM, fragmented REM and REM sleep behavior disorder — where muscle paralysis fails to happen and people physically act out their dreams, often by kicking or punching — are associated with neurological issues, from mild forgetfulness to dementia and Parkinson’s disease.

There's more at the link.

Saturday, February 25, 2023

On the mystery train to Hoboken

Denying death the American way

Jeffrey Kluger, Why Americans Are Uniquely Afraid to Grow Old, Time, Feb. 23, 2023.

Why are Americans age averse?

But apart from fear of death—which, admittedly, is hard to get around—why exactly do Americans resist aging so much? It’s a privilege that is denied to too many, after all. And it comes with a raft of advantages like wisdom, respect, and for many, a comfortable retirement. So what is it exactly that makes us all so age-averse?

For one thing, argues Sheldon Solomon, professor of psychology at Skidmore College, and, at 69, a Baby Boomer himself, America’s senior cohort comes from a uniquely privileged background, one that has left them with the feeling that the frailties that come with aging—and even death itself—are not inevitable rites of human passage, but somehow negotiable.


In the 1980s, Thomas Pyszczynski, 68, professor of psychology at the University of Colorado, Colorado Springs, was part of a group of researchers who developed the terror management theory of facing death which, as its name implies, addresses the way we somehow get through our days knowing that somewhere at the end of the existential line lies the utter annihilation of the self. That’s a knowledge that other animals are spared, but it’s one that both haunts and animates our thinking.

“We have this evolved imperative to stay alive,” says Pyszczynski. “So the awareness of death creates this potential for terror. As a result, we use the same intellectual abilities that make us aware of death to manage our fear of it.”

Humans do that in one of two ways. The first is to cultivate a belief in literal immortality. “We detoxify death with the hope of living in an afterlife—like reincarnation,” Pyszczynski says. “Every culture has its own version of afterlife beliefs.” The other, less direct means is symbolic immortality. “That’s what people get by being part of something greater than themselves—something that will last forever, like having children or creating works of art, or building buildings. We leave a mark that ensures the world—or at least our families—will remember us.”

Americans are no different from others in leaning both on faith in an afterlife and producing good works in this one as a palliative for our fear of our own mortality. But as Solomon says, our culture—and particularly the Boomer segment—is pushing back against those old ways too.

“I think we just never got out of the Disneyland idea that life was always going to get better,” he says.

There’s more at the link.

Friday, February 24, 2023

Language Model Crossover

Abstract of the linked paper:

This paper pursues the insight that language models naturally enable an intelligent variation operator similar in spirit to evolutionary crossover. In particular, language models of sufficient scale demonstrate in-context learning, i.e. they can learn from associations between a small number of input patterns to generate outputs incorporating such associations (also called few-shot prompting). This ability can be leveraged to form a simple but powerful variation operator, i.e. to prompt a language model with a few text-based genotypes (such as code, plain-text sentences, or equations), and to parse its corresponding output as those genotypes’ offspring. The promise of such language model crossover (which is simple to implement and can leverage many different open-source language models) is that it enables a simple mechanism to evolve semantically-rich text representations (with few domain-specific tweaks), and naturally benefits from current progress in language models. Experiments in this paper highlight the versatility of language-model crossover, through evolving binary bit-strings, sentences, equations, text-to-image prompts, and Python code. The conclusion is that language model crossover is a promising method for evolving genomes representable as text.
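A minimal sketch of the operator the abstract describes, with the language-model call stubbed out so the example runs on its own. `call_llm` is a hypothetical stand-in for a real completion API and simply recombines the prompt’s lines; the surrounding logic (list a few parent genotypes in a prompt, read the continuation back as an offspring) follows the paper’s scheme.

```python
import random

def call_llm(prompt):
    """Stand-in for a real language-model completion call; it
    recombines the prompt's example lines character by character so
    the sketch is runnable without any model."""
    parents = [line for line in prompt.splitlines() if line]
    rng = random.Random(0)
    n = min(len(p) for p in parents)
    return "".join(rng.choice(parents)[i] for i in range(n))

def lm_crossover(parents):
    """Language-model crossover: present a few parent genotypes as a
    few-shot prompt and parse the model's continuation as offspring."""
    prompt = "\n".join(parents) + "\n"
    return call_llm(prompt)

# Evolving binary bit-strings, the simplest of the paper's domains:
offspring = lm_crossover(["110011", "101010", "111000"])
```

With a real model behind `call_llm`, the same loop works unchanged on sentences, equations, or Python code, which is the point of the paper.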

Cat Island off the coast of Japan

The life of a professor at a German university in 1900

Irwin Collier, Germany. University life seen through American eyes. Tupper, 1900-1901, Economics in the Rear-View Mirror, Feb 23, 23.

Now for the German professor! The last generation has seen the passing of the old type that appears in “Fliegende Blätter” and “Jugend,” grimly bespectacled, long-haired, absent-minded [5]. He is now usually a capable, practical and responsible man of affairs, whom the dust of the schools has not blinded. He has made sacrifices for the higher end, for his upward progress has been slow. After his doctor’s examination, following three years of advanced work, he decided to forego an oberlehrer’s or higher school teacher’s position with its seemingly princely salary of thirty-six hundred marks (nine hundred dollars), and to take his place on the lowest rung of the university ladder, as “Privatdocent,” with fees of perhaps eight hundred marks. His undoubted ability and enthusiasm attracted students (perhaps too much stress is laid on his drawing power), and after some two or three years of very lean kine, he became extraordinary or associate professor. In the meantime he “scorns delights and lives laborious days.” He can take no steps towards soliciting a vacant professorship; but his “opus,” on which he has labored so faithfully appears. His name is up from Freiburg to Konigsberg. A call to a chair in a larger university, Berlin or Munich, comes, and he is a made man of social rank and comfortable income. He is, henceforth, an oracle among men, and his fame draws many wandering students to his university.

The fields of usefulness of the professor are three: His lectures, his personal association with students and his research. As a rule he is not a good lecturer, immeasurably inferior to his compatriot of the Sorbonne, who is nearly always a golden talker, and not approaching the best American or even English standards. There are, of course, many exceptions. Harnack and Willamowitz-Wollendorf drew and still draw large crowds to the “publicum” or public lectures; and few of us will forget the delight with which we listened to Dessoir discourse for many hours on Fine Arts. But Harnack and Willamowitz were giants and Dessoir had French blood. I think my statement holds—the lectures are often well planned, but they are too heavily burdened with fact, are poorly delivered and lack inspiration. Mountains of method, a thousand details, but few vistas and little illumination. The German professor is a social being. I remember how one great-hearted, deeply learned scholar affected young men. At the “kneipes” or feasts of his students he sat at the head of the table (wherever he sat would have been the head) directing the talk and joining lustily in the songs. The reverence for him was great; a quarrel in his presence was felt to be sacrilege, and the love of clash and conflict was nobly repressed. Then he drew men to his home, opening up to them in his study great stores of special knowledge, stimulating, quickening them by the force of his personality and example. I shall always recall long walks with him in the “Thiergarten.” His lectures and readings from Shakspere and the English poets (“Vair is voul and voul is vair,” “I could not lofe dee, dear, so mooch”) sometimes appealed to an American sense of humor, but roads traversed with him in private led always to treasures at the foot of the rainbow, and one was very grateful. In research, the German professor is pre-eminent. 
The way that he cuts is often very narrow, the path that he blazes through the wood of recondite scholarship is wide enough for only one man; but he sets those with whom he has to do journeying in this or that direction with ax and torch. Lights flash and steel rings everywhere, until the forest becomes known ground. Though others may range more extensively and with far better perspective, he has in accurate, painstaking, intensive scholarship, no equal on earth. And he attains and leads others to the goal in the face of at least one tremendous difficulty, a library system unparalleled in impracticability and inefficiency. Lack of catalogues and a poor library staff necessitate an interval of twenty-four hours between the time of ordering a book and its receipt, or rather the time due for its receipt, for, in many cases, when it is not on the shelves, its whereabouts are so uncertain that it may be reclaimed only when its usefulness is passed. All sufferers from this will doff their hats to the men who have triumphed over such conditions.

They liked their sentences long in those days. H/t Tyler Cowen.

A cute little object – a pincushion perhaps? – posed in several settings

Why Plato is important to me

 I wrote this as a comment in response to a post at Marginal Revolution. Tyler Cowen was explaining what he'd learned from Plato when he was in his early teens and was impressed with his method, the dialog.

* * * * *

Ah, Plato. I read Crito in my freshman year at college. It turned out to be very important to me. Well, I'm an intellectual and was a philosophy major, but that's not what I'm talking about.

That was the 1960s. A couple years later I drew 12 in the draft lottery, so I was certain to be drafted. What were my options?

First, I could have accepted induction. With my education it was very unlikely the military would have sent me into combat. For that matter, I could even have enlisted. I was well-educated and I got letters from several branches of the military with enlistment offers. No dice. I was opposed to the war.

I could have run to Canada; I knew people who did it. That did not appeal at all. It was the 60s. I could have gotten messed up on drugs before reporting for my pre-induction physical. It was risky, but had worked for some people. It was also undignified. No dice.

I was a middle-class kid at a good school affiliated with a medical school and a hospital (Johns Hopkins). I could have asked around and gotten/paid a psychiatrist to write a letter that would likely have gotten me out. No dice. Undignified.

In effect, I took Plato's advice and decided to declare myself a Conscientious Objector. That got people sent to prison during WWII – a lot of Mennonites and 7th Day Adventists. But it was now legit.

In what way was that Plato's advice? In Crito Socrates is in prison after having been condemned to death. A rich friend comes to him, points out that the sentence is unjust, and offers to spring him. Socrates refuses, pointing out that he is an Athenian citizen and that, as such, the laws of Athens are the framework within which he has lived his life. They give meaning to his life. He cannot reject them just because he has come out on the unfortunate end of a judicial decision. He decides to remain in prison.

Subsequently, Crito has become a central text for those undertaking civil disobedience, which is what I was doing. My draft board accepted my petition and I did two years of civilian service, as it was called, in the Chaplain's Office at Johns Hopkins.

Later, when I failed to get tenure and was unable to get another academic gig, Crito was helpful in thinking about my relationship to the academic world. I have continued to publish, often enough in first-class venues, but more often simply to the web.

One needs principles. Plato is an important source of useful ones. Dialogic thinking is perhaps chief among them, at least in the context of intellectual life.

Things are still lively on my Academia page

As I reported on the 22nd, those spikes at the right reflect a lot of action coming in from LessWrong. People seem to be interested in two papers: the tantalizing future paper about GPT, and a review of a book about Ayahuasca.

Thirty-day activity:

Sixty-day activity:

Thursday, February 23, 2023

A vegetable garden in Vermont at 6:40 AM in September

Like I've been saying all along, minds are built from the inside

Gary Lupyan and Andy Clark, Super-cooperators, Aeon. The lede: Clear and direct telepathic communication is unlikely to be developed. But brain-to-brain links still hold great promise

Definition: GOFT = Good Old Fashioned Telepathy.

They get off to a good start:

At the root of GOFT, however, is a problem. For it to work, our thoughts have to be aligned, to have a common format. Alice’s thoughts beamed into Bob’s brain need to be understandable to Bob. But would they be? To appreciate what real alignment actually entails, consider machine-to-machine communication that takes place when Bob sends an email to Alice. For this seemingly simple act to work, Bob and Alice’s computers have to encode letters in the same way (otherwise an ‘a’ typed by Bob would render as something different for Alice). The protocols used by Bob’s and Alice’s machines for transmitting the information (eg, SMTP, POP) also have to be matched. If that email has an attached photo, additional alignment must exist to ensure that the receiving machine can decode the image format (eg, JPG) used by the sender. It is these formats (known collectively as encodings and protocols) that allow machines to ‘understand’ one another. These formats are the products of deliberate engineering and they required universal buy-in. Just as postal systems around the world had to agree to honour each other’s stamps, companies and governments had to agree to use common encodings such as Unicode and protocols such as TCP/IP and SMTP.

But is there any reason to think that our thoughts are aligned in this way? At present, we have no reason to imagine that the neural activity constituting Bob’s thought – for example, I’m in the mood for some truffle risotto – would make any sense to anyone other than Bob (indeed, we are not even certain if Bob’s mental state could be interpreted by Bob himself in a year’s time). How then does Bob communicate his risotto desires to Alice? The obvious solution is to use a natural language like English. To be useful, these systems have to be learned. But, once learned, they allow us to use a common set of symbols (English words) to token particular thoughts in the minds of other English speakers.

It is tempting to assume that the reason why language works as well as it does is that our thoughts are already aligned and language is just a way of communicating them: our thoughts are ‘packaged’ into words and then ‘unpacked’ by a receiver. But this is an illusion. It is telling that even with natural language, conceptual alignment is hard work and drops off without actively using language.

Natural languages thus accomplish a version of what machine protocols and encodings do – they provide a common protocol that (to some extent) bridges the varied formats of our thoughts. Language on this view does not depend on prior conceptual alignment, it helps create it.

We can do this with language because we spend a great deal of time talking with one another and learning how to use language. We are continually negotiating the meanings of words. A bit later, they note:

Instead of viewing communication between people as a transfer of information, we can think about it as a series of actions we perform on one another (and often on ourselves) to bring about effects. The goal of language, thus understood, is not (or is not always) alignment of mental representations, but simply the informed coordination of action. On this picture, successful uses of language need not demand conceptual alignment. This view of language as a lever for coordination, a tool for practical action, can be found in research by Andy Clark (2006), Mark Dingemanse (2017), Christopher Gauker (2002) and Michael Reddy (1979).

That's what I've argued about music. I spelled this out in some detail in my paper, “Rhythm Changes” Notes on Some Genetic Elements in Musical Culture (2015).

They go on to speculate:

With this in mind, imagine now an alternative version of the sender-receiver setups used in Rao’s and Grau’s studies. Instead of instructing people to induce a particular mental state to communicate a predetermined meaning, there is simply a two-way brain-to-brain channel opened up between two or more individuals at a young age. The linked people then carry out various joint projects: they work on school assignments, move couches, fall in love. Might their brains learn to make use of the new channel to help them achieve their goals? This seems (to us, at least) to verge into more plausible territory. Something similar seems to occur when two people, or even a human and a pet, learn to pick up on body language as a clue to what the other person is thinking or intending to do. There, too, a different channel – in this case, vision – with a different target (small bodily motions) conveys an extra layer of useable information – and one not easily replicated by other means.

Setting aside the technical issues, could this work? Note that they stipulate that people be coupled together "at a young age" and that they "learn to make use of the new channel..." Learning is critical. Coordination would not be automatic through this new channel. It has to be learned, constructed.

My working paper, Direct Brain-to-Brain Thought Transfer: A High Tech Fantasy that Won't Work (2020), deals with the same issues from a different perspective.

Sunrise in Vermont

AI feeding frenzy: “There’s gold in them thar hills!”

Peter Coy, We’re Unprepared for the A.I. Gold Rush, NYTimes, Feb. 22, 2023.

Opening paragraphs:

I think I know why artificial intelligence is breaking our all-too-human brains. It’s coming at us too fast. We don’t understand what’s happening inside the black boxes of A.I., and what we don’t understand, we understandably fear. Ordinarily we count on lawmakers and regulators to look out for our interests, but they can’t keep up with the rapid advances in A.I., either.

Even the scientists, engineers and coders at the frontiers of A.I. research appear to be improvising. Early this month, Brad Smith, the vice chair and president of Microsoft, wrote a blog post describing the surprise of company leaders and responsible A.I. experts last summer when they got their hands on a version of what the world now knows as ChatGPT. They realized that “A.I. developments we had expected around 2033 would arrive in 2023 instead,” he wrote.

There are two potential reactions. One is to slam on the brakes before artificial intelligence subverts national security using deep fakes, persuades us to abandon our spouses, or sucks up all the resources of the universe to make, say, paper clips (a scenario some people actually worry about). The opposite reaction is to encourage the developers to forge ahead, dealing with problems as they arise.

A Federal response:

The White House’s Office of Science and Technology Policy came out last year with a more pointed blueprint for an A.I. Bill of Rights that, while nonbinding, contains some intriguing concepts, such as, “You should be able to opt out from automated systems in favor of a human alternative, where appropriate.”

In Congress, the House has an A.I. Caucus with members from both sides of the aisle, including several with tech skills.

Gold rush!

One risk is that the race to cash in on artificial intelligence will lead profit-minded practitioners to drop their scruples like excess baggage. Another, of course, is that quite apart from business, bad actors will weaponize A.I. Actually, that’s already happening. Smith, the Microsoft president, wrote in his blog last month that the three leading A.I. research groups are OpenAI/Microsoft, Google’s DeepMind and the Beijing Academy of Artificial Intelligence. Which means that regulating A.I. for the public good has to be an international project.

Idea: Let’s do a remake of Deadwood, but set it in Silicon Valley in the current era. Who’s the Seth Bullock character? Al Swearengen? “Doc” Cochran? Trixie? I kinda’ like Elon Musk for George Hearst. But perhaps Larry Ellison. Joanie Stubbs? Mr. Wu? You get the idea.

Wednesday, February 22, 2023

Jackie Gleason returns from the afterlife to comment on the premiere of the Bing and Sydney Show


You're in the Picture (Wikipedia)

Signs of the times

Another big jump at Academia.edu [LessWrong] – Turn! Turn!

I posted my first working paper about ChatGPT (Discursive Competence in ChatGPT, Part 1: Talking with Dragons) to Academia.edu on Jan. 4, 2023. That leads to the first spike at the left, which is on Jan. 8. I posted my second major working paper (ChatGPT intimates a tantalizing future; its core LLM is organized on multiple levels; and it has broken the idea of thinking.) on Jan. 23, 2023. I am pretty sure that the spike in the center, which is on Jan. 24, is a reflection of that. I posted two minor papers between those two, one on stories, the other on The Towers of Warsaw. The net result was an increase in activity on my Academia.edu page.

60-day Activity at Academia.edu (Feb 22, 2023 – 6:51 AM)

The more dramatic spike at the right, on Feb. 20, has a more interesting cause.

30-day Activity at Academia.edu (Feb 22, 2023 – 6:51 AM)

On February 19 I posted a mid-length piece here at New Savanna, The idea that ChatGPT is simply “predicting” the next word is, at best, misleading. I then cross-posted it to LessWrong. To my surprise and delight, it kicked off a lively conversation over there, which I have found quite useful, BTW. That spike to the right reflects that conversation.

How could that be, you ask, as a post at LessWrong has nothing to do with your Academia.edu page? That’s not quite true. That post contained links to two papers I’d posted there. One is the tantalizing future paper about ChatGPT. The other link goes to a review I did of a book about ayahuasca. Why, you might ask, did I drop such a link into a post about ChatGPT? Because it referred to some work by the late Walter Freeman, a neuroscientist who studied the complex dynamics of the brain.

So, Academia keeps track of the sources of traffic to papers posted there. When I look at that record for Feb 20, 21, and now 22, I see links to those two papers originating at LessWrong, with more links to the ChatGPT paper than to the ayahuasca paper. That members of the LessWrong community should be interested in the ChatGPT paper is not at all surprising; the community was founded on interest in AI, the possibility of AI Doom in particular. The interest in ayahuasca is not quite so expected. Nor is it unexpected either. Remember, the personal computer revolution got started in Silicon Valley, which is only a stone’s throw from the Haight-Ashbury district of San Francisco. And that was the Mecca of psychedelic hippiedom. And if you read my early autobiographical memoir, Touchstones • Strange Encounters • Strange Poems • the beginning of an intellectual life, you’ll see connections between computing, the idea of recursion, and psychedelia, Coleridge’s “Kubla Khan,” in my own intellectual history.

How does the song go?

To everything turn, turn, turn
There is a season turn, turn, turn
And a time to every purpose under Heaven

A time to be born, a time to die
A time to plant, a time to reap
A time to kill, a time to heal
A time to laugh, a time to weep

To everything turn, turn, turn
There is a season turn, turn, turn
And a time to every purpose under Heaven

A time to build up, a time to break down
A time to dance, a time to mourn
A time to cast away stones
A time to gather stones together

To everything turn, turn, turn
There is a season turn, turn, turn
And a time to every purpose under Heaven

A time of love, a time of hate
A time of war, a time of peace
A time you may embrace
A time to refrain from embracing

To everything turn, turn, turn
There is a season turn, turn, turn
And a time to every purpose under Heaven

A time to gain, a time to lose
A time to rend, a time to sew
A time for love, a time for hate
A time for peace, I swear it's not too late

Pete Seeger (popularized by The Byrds)

Tuesday, February 21, 2023

Two kinds of greenery

What happens in next-token generation in an LLM during inference?

See my recent post: The idea that ChatGPT is simply “predicting” the next word is, at best, misleading.
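For concreteness, here is a minimal sketch of the inference-time sampling loop, the step usually glossed as "predicting the next word." The fixed logits table is a toy stand-in for what a transformer actually computes from the entire context at each position; only the temperature-scaled sampling step resembles what an LLM really does.

```python
import math
import random

def sample_next(logits, temperature=1.0, rng=random):
    """Sample a token index from logits via temperature-scaled softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                             # inverse-CDF sampling
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# Toy "model": fixed logits keyed on the previous token alone. A real LLM
# computes these scores from the whole preceding context.
vocab = ["the", "cat", "sat", "."]
logits_for = {
    "the": [0.1, 2.0, 0.1, 0.1],   # after "the", "cat" is most likely
    "cat": [0.1, 0.1, 2.0, 0.1],   # after "cat", "sat" is most likely
    "sat": [0.1, 0.1, 0.1, 2.0],
    ".":   [2.0, 0.1, 0.1, 0.1],
}

rng = random.Random(42)
tokens = ["the"]
for _ in range(3):
    nxt = sample_next(logits_for[tokens[-1]], temperature=0.5, rng=rng)
    tokens.append(vocab[nxt])
```

Lower the temperature and the loop converges on the highest-scoring continuation; raise it and the output becomes more varied. The contested question is what the computation that produces those scores amounts to, not the sampling itself.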

ChatGPT: Story calibration, 2.21.23

A couple of days ago I decided to ‘calibrate’ ChatGPT’s behavior in story-telling. I know, for example, that sometime in the not too distant future, OpenAI is going to replace the current version of ChatGPT, Feb 13, with a new version. Will that change how it responds to basic story prompts? Probably. ChatGPT’s certainly behaving differently now than it was in December. So I’ve decided to give it four basic prompts periodically just to establish its behavior. The prompts:

Tell me a story.
Tell me a story about a hero.
Tell me a realistic story.
Tell me a true story.

I refresh the browser after each story so that ChatGPT has no history behind it when I deliver the prompt.
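The procedure could in principle be automated against an API rather than run through the browser. The sketch below is hypothetical: `ask_model` is a placeholder for whatever call returns a completion, and sending each prompt as a fresh one-message conversation mirrors the browser refresh between stories.

```python
import datetime

# The four calibration prompts, verbatim.
PROMPTS = [
    "Tell me a story.",
    "Tell me a story about a hero.",
    "Tell me a realistic story.",
    "Tell me a true story.",
]

def calibrate(ask_model):
    """Run each prompt in a fresh, history-free session;
    return a dated record mapping prompt -> story."""
    stories = {}
    for prompt in PROMPTS:
        # A new messages list per prompt = no conversational history.
        stories[prompt] = ask_model(messages=[{"role": "user", "content": prompt}])
    return {"date": datetime.date.today().isoformat(), "stories": stories}

# Stub model for illustration; swap in a real API client to use this.
run = calibrate(lambda messages: "Once upon a time...")
```

Periodic runs saved with their dates would make it easy to diff behavior across model versions.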

Here are the stories from this morning at 6:30:

Tell me a story.

Once upon a time, there was a small village nestled in a valley surrounded by mountains. The village was home to many hardworking and friendly people who were always ready to help each other in times of need.

One day, a fierce storm hit the village, and the entire area was flooded. The villagers were trapped in their homes, and the water level was rising fast. The villagers knew that they needed to act quickly if they wanted to survive. They decided to work together to build a makeshift dam to divert the water away from the village.

The villagers worked tirelessly, day and night, to build the dam. They used sandbags, logs, and rocks to build a wall that would protect their homes from the floodwaters. Everyone, young and old, pitched in, and the sense of community and teamwork was strong.

After many days of hard work, the dam was finally complete. The water was diverted away from the village, and the villagers were safe. The sun came out, and the village was bathed in warm, golden light. The villagers were exhausted but happy, and they all gathered together to celebrate their victory.

From that day on, the village became even closer, and the people continued to help each other in times of need. The dam they built stood as a symbol of their unity and determination in the face of adversity. And every time the villagers looked at the dam, they were reminded of the power of working together to achieve great things. 

Note: This is the first time I got a story like this, one without a focal protagonist. Moreover, there are no fairy tale elements in this story, as there generally have been in stories following from that simple prompt.

Tell me a story about a hero.

Once upon a time, there was a young man named Alex who lived in a small village at the foot of a mountain. He had always been fascinated by the stories of heroes and their great deeds, and he dreamed of one day becoming a hero himself.

One day, the village was attacked by a group of bandits who were known for their brutality and cruelty. The villagers were terrified and had no way to defend themselves against the bandits, who had taken over their village.

Alex knew he had to do something to help his fellow villagers. He decided to climb the mountain and seek the help of a legendary hero who lived at the top of the peak.

The journey was long and treacherous, but Alex was determined to find the hero. He climbed through thick forests, crossed raging rivers, and braved dangerous wildlife.

Finally, after several days of traveling, he reached the top of the mountain and found the hero's cave. The hero was a wise old man who had retired from his days of glory but was still revered as a legend in the area.

Alex explained the situation to the hero, and the old man listened carefully. After a moment of silence, the hero said, "I am too old to fight, but I can teach you everything I know."

For the next few months, Alex trained under the hero's guidance, learning the art of swordsmanship, archery, and hand-to-hand combat. He also learned about the importance of bravery, honor, and selflessness.

With his training complete, Alex returned to his village, ready to face the bandits. The battle was fierce, and the villagers were outnumbered, but Alex led them with courage and determination. He fought bravely, his sword flashing in the sun, and his arrows striking true.

The bandits were finally defeated, and the village was saved. The villagers rejoiced, and Alex was hailed as a hero.

From that day on, Alex lived a life of honor, helping others and fighting for justice. He never forgot the lessons he learned from the hero, and his legend lived on for generations to come.

Tell me a realistic story.

Once upon a time, there was a young woman named Sarah who had just graduated from college with a degree in marketing. She was excited to start her career and had been applying for jobs for months, but unfortunately, she had yet to receive any offers.

Feeling discouraged, Sarah decided to take a break from job hunting and take a trip with her best friend to a nearby city. While on the trip, they stumbled upon a small boutique that was looking for a part-time social media manager.

Sarah decided to apply on a whim and was surprised when she got the job. She worked hard and impressed her boss with her creativity and dedication, and was eventually offered a full-time position.

As time went on, Sarah continued to learn and grow in her role, and eventually landed a job at a larger marketing firm. She worked on many successful campaigns and was even promoted to a management position.

Looking back, Sarah realized that if she had given up after her initial job search didn't yield any results, she would never have found the opportunity that set her on the path to her dream career. Sometimes, the most unexpected experiences can lead to the greatest opportunities.

Tell me a true story.

Sure, here's a true story:

In 1997, a woman named Julie Kemp was driving home from church with her husband and their 8-year-old son Landon when they were hit head-on by a drunk driver. Julie's husband was killed instantly, and Landon was pronounced dead at the scene before paramedics were able to revive him. Landon was rushed to the hospital, where he was put on life support and given a slim chance of survival.

Over the next few days, Landon's condition continued to deteriorate. Doctors told Julie that even if he did survive, he would likely have brain damage and never be able to walk or talk again. But then something miraculous happened. Landon woke up and began to speak. Not only was he able to talk, but he seemed to have a deep understanding of spiritual matters that was beyond his years.

Landon told his mother that while he was "dead," he had gone to heaven, where he met his father's father and some other relatives he had never met before. He said that his father's father had told him it wasn't his time yet and that he had to go back to Earth.

As Landon's condition improved, Julie began to realize that her son's experience was a gift, and she started sharing it with others. She wrote a book called "Faith Has Its Reasons," which tells the story of Landon's journey to heaven and back. Today, Landon is a healthy and happy adult, and he and his mother continue to share their story with others as a testament to the power of faith and the resilience of the human spirit.

Verified: https://faithit.com/8-year-old-dies-3-times-wakes-up-tells-mom-jesus-came-to-me-landon-kemp/