Monday, May 31, 2021

Geoffrey Hinton says deep learning will do everything. I’m not sure what he means, but I offer some pointers. Version 2.

This is updated from a previous version to include a passage by Sydney Lamb.

* * * * *

Late last year Geoffrey Hinton had an interview with Karen Hao [1] in which he said “I do believe deep learning is going to be able to do everything,” with the qualification that “there’s going to have to be quite a few conceptual breakthroughs.” I’m trying to figure out whether or not, to what extent, in what way I (might) agree with him.

Neural Vectors, Symbols, Reasoning, and Understanding

Hinton believes that “What’s inside the brain is these big vectors of neural activity” and that one of the breakthroughs we need is “how you get big vectors of neural activity to implement things like reason.” That will certainly require a massive increase in scale. Thus, while GPT-3 has 175 billion parameters, the brain has roughly 100 trillion, where Hinton treats each synapse as a parameter.

Correspondingly, Hinton rejects the idea that symbolic reasoning is primitive to the nervous system (my formulation); rather, “we do internal operations on big vectors.” What about language? He doesn’t address the issue directly, but he does say that “symbols just exist out there in the external world.” I do think that covers language: speech sounds, written words, and gestural signs are all out there in the external world. But the brain uses “big vectors of neural activity” to process them.

Hinton’s remark about symbols bears comparison with a remark by Sydney Lamb: “the linguistic system is a relational network and as such does not contain lexemes or any objects at all. Rather it is a system that can produce and receive such objects. Those objects are external to the system, not within it”[2]. Lamb has come to think of his approach as neurocognitive linguistics and, while his sense of the nervous system is somewhat different from Hinton’s, they agree on this issue and, in the current intellectual climate, that agreement is of some significance. For Lamb is a first-generation researcher in machine translation and so was working when most AI research was committed to symbolic systems. We’ll return to Lamb later, as I think the notation he developed is a way to bring symbolic reasoning within range of Hinton’s “big vectors of neural activity”.

But now let’s return to Hinton with a passage from an article he co-authored with Yann LeCun and Yoshua Bengio [3]:

In the logic-inspired paradigm, an instance of a symbol is something for which the only property is that it is either identical or non-identical to other symbol instances. It has no internal structure that is relevant to its use; and to reason with symbols, they must be bound to the variables in judiciously chosen rules of inference. By contrast, neural networks just use big activity vectors, big weight matrices and scalar non-linearities to perform the type of fast ‘intuitive’ inference that underpins effortless commonsense reasoning.

I note, however, that commonsense reasoning seems to be problematic for everyone.[4]

Let’s look at one more passage from the interview:

For things like GPT-3, which generates this wonderful text, it’s clear it must understand a lot to generate that text, but it’s not quite clear how much it understands.

I’m not sure that it is at all useful to say that GPT-3 understands anything. I think that, in using that term, Hinton is displaying what I’ve come to think of as the word illusion.[5] Briefly, GPT-3’s language model is constructed over a corpus consisting entirely of word forms, of signifiers without signifieds, to use an old terminology. Hinton knows that, of course. But, after all, he understands texts on the basis of word forms alone, as do we all, and so, in effect, he credits GPT-3 with somehow having induced meaning from a statistical distribution. The text it generates looks pretty good, no? Yes. And that is something we do need to understand: just what is GPT-3 doing, and how does it do it? But this is not the place to enter into that.[6]

I think that GPT-3’s remarkable performance based on such ‘shallow’ material should prompt us into reconsidering just what humans are doing when we produce everyday ‘boilerplate’ text. Consider this passage from LeCun, Bengio, and Hinton, where they are referring to the use of an RNN:

This rather naive way of performing machine translation has quickly become competitive with the state-of-the-art, and this raises serious doubts about whether understanding a sentence requires anything like the internal symbolic expressions that are manipulated by using inference rules. It is more compatible with the view that everyday reasoning involves many simultaneous analogies that each contribute plausibility to a conclusion.

In dealing with these utterly remarkable devices, we would do well to rein in our narcissistic investment in the routine use of our ‘higher’ cognitive and linguistic capacities as opposed to our mere sensory-motor competence. It’s all neural vectors.

Note, however, that it is one thing to say that “we do internal operations on big vectors.” I agree with that. But that is not quite the same as saying we can do everything with deep learning. Deep learning is a collection of architectures, and I’m not sure such architectures are adequate for internalizing the vectors needed to effectively mimic human perceptual and cognitive behavior. The necessary conceptual breakthroughs will likely take us considerably beyond deep learning engines. With that qualification, let’s continue.

How the brain might be doing it

I find that, with the caveats I’ve mentioned, this is rather congenial. Which is to say that I can make sense of it in terms of issues I’ve thought through in my own work.

Some years ago David Hays and I wanted to come to terms with neuroscience and ended up reviewing a wide range of work and writing a paper entitled “Principles and Development of Natural Intelligence.”[7] The principles are ordered such that principle N assumes principle N-1. We called the fifth and last principle indexing:

The indexing principle is about computational geometry, by which we mean the geometry, that is, the architecture (Pylyshyn, 1980) of computation rather than computing geometrical structures. While the other four principles can be construed as being principles of computation, only the indexing principle deals with computing in the sense it has had since the advent of the stored program digital computer. Indexed computation requires (1) an alphabet of symbols and (2) relations over places, where tokens of the alphabet exist at the various places in the system. The alphabet of symbols encodes the contents of the calculation while the relations over places, i.e. addresses, provide the means of manipulating alphabet tokens in carrying out the computation. [...] Within the context of natural intelligence, indexing is embodied in language. Linguists talk of duality of patterning (Hockett, 1960), the fact that language patterns both sounds and sense. The system which patterns sound is used to index the system which patterns sense.

In short, “indexing gives computational geometry, and language enables the system to operate on its own geometry.” This is where we get symbols and complex reasoning.

I should note that, while we talked of “an alphabet of symbols” and “relations over places” we were not asserting that that’s what was going on in the brain. That’s what’s actually going on in computers, but it applies only figuratively to the brain. The system that is using sound patterns to index patterns of sense is using one set of neural vectors (though we didn’t use that term) to index a different set of neural vectors.
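The indexing idea can be sketched in code. In this toy example (the names, sizes, and the nearest-neighbor lookup are all my own illustrative assumptions, not anything from the paper), one set of random vectors stands in for sound patterns and is used to address a second set standing in for sense patterns:

```python
import numpy as np

# Toy illustration (hypothetical names and sizes): vectors standing in
# for sound patterns (signifiers) are used to address vectors standing
# in for sense patterns (signifieds) via nearest-neighbor lookup, i.e.,
# one set of "neural vectors" indexes another.
rng = np.random.default_rng(0)

sound_vectors = rng.normal(size=(10, 64))  # word-form ("sound") vectors
sense_vectors = rng.normal(size=(10, 64))  # paired meaning ("sense") vectors

def index_sense(query_sound):
    """Use a (possibly noisy) sound vector as an address into the sense system."""
    sims = sound_vectors @ query_sound  # similarity to each stored word form
    return int(np.argmax(sims))         # index of the retrieved meaning vector

# A noisy rendition of word form 3 still addresses meaning 3.
noisy = sound_vectors[3] + 0.1 * rng.normal(size=64)
print(index_sense(noisy))
```

The brain analogy is loose, of course; the point is only that addressing one vector population by its similarity to another is a natural operation on “big vectors,” one that attention mechanisms in deep learning already exploit.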

How do we get deep learning to figure that out? I note that automatic image annotation is a step in that direction [8], but have nothing to say about that here.

Instead I want to mention some informal work I did some years ago on something I call attractor nets.[9] The general idea was to use Sydney Lamb’s relational networks, in which nodes are logical operators, as a tertium quid between the symbol-based semantic networks Hays and I had worked on in the 1970s and the attractor landscapes of Walter Freeman’s neurodynamics. I showed – informally, using diagrams – how using logical operators (AND, OR) over attractor basins in different neurofunctional areas could reconstruct symbolic systems represented as directed graphs. Each node in a symbolic graph corresponds to a basin of attraction, that is, an attractor. In the present context we can think of each neurofunctional area as corresponding to a collection of neural vectors and the attractors as objects represented by those vectors. An attractor net would then become a way of thinking about how complex reasoning could be accomplished with neural vectors.
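As a rough illustration of that reconstruction (entirely my own toy rendering, with hypothetical concept names), a fragment of a symbolic graph can be expressed as AND/OR operators over basins that are either settled or not:

```python
from dataclasses import dataclass

@dataclass
class Basin:
    """An attractor basin in some neurofunctional area: settled (active) or not."""
    settled: bool = False

@dataclass
class OpNode:
    """A relational-network node: a logical operator over basins or other nodes."""
    op: str       # "AND" or "OR"
    inputs: list  # Basins and/or OpNodes

    def active(self) -> bool:
        vals = [x.settled if isinstance(x, Basin) else x.active()
                for x in self.inputs]
        return all(vals) if self.op == "AND" else any(vals)

# Hypothetical fragment: "animal" is evoked by either the "dog" or the
# "cat" basin (OR); "pet dog" requires both "dog" and "pet" (AND).
dog, cat, pet = Basin(True), Basin(False), Basin(True)
animal = OpNode("OR", [dog, cat])
pet_dog = OpNode("AND", [dog, pet])
print(animal.active(), pet_dog.active())  # True True
```

Here each Basin plays the role of an attractor in one neurofunctional area, and the operator nodes show how activity distributed over several areas can stand in for a node in a symbolic directed graph.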

In the attractor net notation, word forms, or signifiers, are distinct from word meanings, or signifieds. Is that distinction important for complex reasoning? I believe it is, though I’m not interested in constructing an argument at this point. That, I believe, puts a limit on what one can expect of engines like GPT-3. That too requires an argument.

So, what about natural vs. artificial intelligence?

The notion of intelligence is somewhat problematic. As a practical matter I believe that a formulation by Robin Hanson is adequate: “‘Intelligence’ just means an ability to do mental/calculation tasks, averaged over many tasks.”[10] As for the difference between artificial and natural, that comes down to four things:

1) a living system vs. an inanimate system,
2) a carbon-based organic electro-chemical substrate vs. a silicon-based electronic substrate,
3) real neurons (having on average 10K connections with others) vs. considerably simpler artificial neurons realized in program code, and
4) the neurofunctional architecture and innate capacities of a real brain vs. the system architecture of a digital computing system.

Make no mistake, those differences are considerable. But I think we now have in hand a body of concepts and models that is rich enough to support ever more sophisticated interaction between students of neuroscience and students of artificial intelligence. To the extent that our research and teaching institutions can support that interaction I expect to see progress accelerate in the future. I offer no predictions about what will come of this interaction.

Some related posts

William Benzon, Showdown at the AI Corral, or: What kinds of mental structures are constructible by current ML/neural-net methods? [& Miriam Yevick 1975], New Savanna, June 3, 2020.

William Benzon, What’s AI? – Part 2, on the contrasting natures of symbolic and statistical semantics [can GPT-3 do this?], New Savanna, July 17, 2020.

William Benzon, A quick note on the ‘neural code’ [AI meets neuroscience], New Savanna, April 20, 2021.


[1] Interview with Karen Hao, AI pioneer Geoff Hinton: “Deep learning is going to be able to do everything”, MIT Technology Review, Nov. 3, 2020.

[2] Sydney Lamb, Linguistic Structure: A Plausible Theory, Language Under Discussion, 4(1), 2016, 1–37.

[3] From Yann LeCun, Yoshua Bengio, and Geoffrey Hinton, Deep Learning, Nature, 521, 28 May 2015, 436–444.

[4] As an example I offer a recent post in which I quiz GPT-3 about a Jerry Seinfeld bit: Analyze This! Screaming on the flat part of the roller coaster ride [Does GPT-3 get the joke?], May 7, 2021.

[5] See my post, The Word Illusion, May 12, 2021.

[6] For some extended remarks, see my working paper, GPT-3: Waterloo or Rubicon? Here be Dragons, Working Paper, Version 2, August 20, 2020, 34 pp.

[7] William Benzon and David Hays, Principles and Development of Natural Intelligence, Journal of Social and Biological Structures, Vol. 11, No. 8, July 1988, 293–322.

[8] Wikipedia, Automatic image annotation.

[9] William Benzon, Attractor Nets, Series I: Notes Toward a New Theory of Mind, Logic, and Dynamics in Relational Networks, Working Paper, 52 pp.

William Benzon, Attractor Nets 2011: Diagrams for a New Theory of Mind, Working Paper, 55 pp.

William Benzon, From Associative Nets to the Fluid Mind, Working Paper, October 2013, 16 pp.

[10] Robin Hanson, I Still Don’t Get Foom, Overcoming Bias, July 24, 2014.

Signal tower

Sunday, May 30, 2021

“NATURALIST” criticism, NOT “cognitive” NOT “Darwinian” – A Quasi-Manifesto

Time to once again bump this to the top of the queue. 

* * * * *

Reposted from The Valve, 31 March 2010, this is an informal manifesto for whatever it is I'm up to, and why I've come to think of it as naturalist criticism. I could link a lot more into this piece now, especially my recent work on ring composition and digital humanities, but I won't. It's a decent map to my work in literature, a place-holder until I have time to do something a bit more formal.

You mean a quasifesto?

Shoo, get out . . .

Fact is, if I’d known then what I know now, I’d never have thought of myself as being in the business of bringing cognitive science to literary criticism, much less represented myself to the world in that way. But I didn’t (know) and I did (represent), so now I seem stuck with the moniker. I’d like to shake it off.

When I finally decided to publish a programmatic and methodological statement, “Literary Morphology: Nine Propositions in a Naturalist Theory of Form,” I adopted naturalism as a label. Fact is, I’d just as soon not think of it as anything but the study of literature. But we live in an age of intellectual brands, so I chose “naturalism” as mine.

Yes, I know that “the natural” is somewhat problematic, but you’ll just have to get past that. No label is perfect and I’m not about to coin a new term. Assuming you can struggle past the word, what does naturalism suggest to you? To me the term conjures up a slightly eccentric investigator wandering about the world examining flora and fauna, writing up notes, taking photos, making drawings, and perhaps even collecting specimens. That feels right to me, except that I’m nosing about poems, plays, novels, films, and other miscellaneous things. Beyond that I’d like the term to suggest some sense of literature as thoroughly in and a part of the world. There’s only one world and literature exists in it.

Beyond that, what does the term suggest? . . . Nothing, that’s what I’d like it to suggest, nothing. But whatever this naturalist criticism is or might become, that it has some kind of name suggests that it’s probably not myth criticism, New Criticism, Marxist criticism, psychoanalytic, deconstructive, archetypal, phenomenological, reader response, or any of the other existing critical brands.

What do the terms “cognitive criticism” or “cognitive rhetoric” suggest? Like many of those other labels, they suggest some body of supplementary knowledge and practice that one brings to the study of literature. Just exactly what that supplementary body is may not be terribly clear. But that doesn’t matter. The terms emphasize and draw your attention to the supplement. The same with “Darwinian literary criticism,” only vaguer. The only thing that’s clear about that label is the towering intellectual figure it invokes, whose work had nothing to do with the study of literature.

None of this should be taken to imply that I’ve lost interest in the newer psychologies, as I like to call them. I haven’t. I believe that future literary studies must take them into account, and other theories, concepts and models as well. I just don’t want to stick those names in my brand label.

Anything else?

Yes, I put the study of form at the center of the enterprise.

So why not label yourself a formalist?

Because that’s already a term of art, and it’s too strongly identified with approaches that treat the text as an autonomous object more or less independent of reader, author, and the larger world. For that matter, many formalist critics are more interested in textual autonomy than in systematically analyzing and describing the manifold formal aspects of literary texts. In the end, they’re as greedy after meaning as most other critics are.

OK, so what do you have in mind with this naturalist criticism that emphasizes form?

Good question. And I’m afraid my best answer is a bit embarrassing. I figure the best way to scope out any literary program is to look at practical criticism. What does it do with an actual text, at some length and in some detail? And the best examples I know are, umm, err, from my own work. And that, as I said, is embarrassing. I’d rather point out someone else’s work.

Really? There’s nothing else? Your work is de novo, so to speak?

Well, everyone has precursors and models. I was certainly influenced by the structuralists, Roman Jakobson, Edmund Leach, Jean Piaget, and Lévi-Strauss above all. For that matter I should probably know narratology better than I do. And I’ve enjoyed the detailed analytic work that David Bordwell’s posted on his blog, though I’ve not gotten around to reading any of his books except Making Meaning, which is an analysis and critique of poststructuralist film criticism. If more people analyzed literary texts the way Bordwell analyzes film, that would be good.
[More specifically, check out, e.g. Bordwell’s post on “Tell, don’t show,” or this post on “Kurosawa’s early spring,” other posts tagged as “Film technique,” and this essay, “Anatomy of the Action Picture.”]
OK OK, I get the idea. I’m skeptical, but go on. What’s your best analytic work?

I suppose my recent essays on “Kubla Khan” and “This Lime-Tree Bower My Prison,” but they’re a bit of a slog, long and detailed, with lots of diagrams. I like this old piece on Sir Gawain and the Green Knight too, and this Shakespeare piece, which looks at three plays, Much Ado About Nothing, Othello, and The Winter’s Tale, and even uses evolutionary psychology.

Perhaps the best place to start would be my recent post: Two Rings in Fantasia: Nutcracker and Apprentice. It focuses on form and it’s got some nice screen shots too. It’s relatively short and pretty much free of abstract critical apparatus, though there’s an addendum that heads off into the abstractosphere. Yeah, it’s about film, not literature, but that’s a secondary issue that has no bearing on my main point.

As the title suggests, I consider two episodes in Disney’s Fantasia, the Nutcracker Suite and the Sorcerer’s Apprentice. There are three things going on in that post: 1) the analysis and description of so-called ring forms in the two episodes, which is my main focus, 2) a brief characterization of the spatial worlds in the two episodes, and 3) some informal remarks on what those episodes might mean.

A ring form is a text in which episodes mirror one another around a central point or episode, thus: A, B, C . . . C’, B’, A’. One of these episodes has a conventional narrative (Sorcerer’s) while the other does not. But both are rings. Pending strong arguments to the contrary, I regard that as a fact about them. The ring structure really is there; it’s not something I’m reading into the episodes. The core work in the post is to report the relatively straight-forward analytical work needed to establish ring structure as a descriptive fact.
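The mirror test itself is mechanical once the episodes are labeled. A minimal sketch, treating echoing episodes as sharing a label (the labels here are hypothetical placeholders, not my actual analysis of the films):

```python
def is_ring(episodes):
    """True if episode labels mirror one another around the center: A B C ... C' B' A'."""
    return episodes == episodes[::-1]

# Hypothetical labels for a seven-episode ring:
ring = ["A", "B", "C", "D", "C", "B", "A"]
print(is_ring(ring))                   # True
print(is_ring(["A", "B", "C", "D"]))   # False
```

The hard analytical work, of course, is deciding where one episode ends and the next begins, and which episodes genuinely echo one another; the symmetry check is the trivial last step.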

In the course of looking at the ring structure I also offer some remarks on the structure of the visual worlds in the two episodes and how the virtual camera moves through them. This plays no role in my argument about ring structure, but it is a formal feature of the episodes and is important in the larger scope of the whole film. Each of the eight episodes has a different theme and subject matter, and each has a different animation style. Somewhere “between” the style and the subject matter you have visual space and movement through it.

Finally, I offer some interpretive comments, some observations about what these episodes might mean. In the case of Nutcracker those suggestions lean toward the Freudian, though I suppose some might argue it has no meaning at all, that it’s just a bunch of pretty pictures set to music. Sorcerer’s is a different case, because here we have an actual story. I suppose I could’ve gone Freudian and worked on Father and Son, but I got stuck on all those industrious brooms parading across the screen and ended up giving a nod toward the Marxists.

Well, OK, OK. I’ve read the paper and it’s a nice piece of work.

Thank you.

But I don’t see anything new in kind.

Well, yes, I didn’t invent anything, but . . . .

Anyone could do it. It’s well within range of a good undergraduate . . .

And did you notice it’s not jam-packed with a lot of conceptual apparatus?

That’s what I mean, it’s almost as if anyone could do it.

Well, I rather doubt that. You do have to have a “feel” for the job, and that takes time and experience. You have to work with texts (or films) to learn how to work with them. You can’t get it by reading books and articles. But the absence of a lot of apparatus, that’s a feature, not a bug. In any event the thing to notice is that formal analysis and description is at the center of the piece.

But that’s not a central focus of practical criticism in the discipline as it is currently practiced. Nor does it seem to be on the radar screen for the cognitivists and the Darwinians. They still treat meaning as the main event.

A story about computers & graphics in words & images [1984 to 2021 || freedom in discipline]

Back in 1984 Apple released the Macintosh computer. My friend Rich sent me a letter he'd written on one, not his, someone else's. It had some prose and one image, in the middle of the page. The fact that it could handle letters and images on the same page thrilled me. So I took out a bank loan and bought one: the 128K "classic" Mac, plus external floppy drive, a dot-matrix printer, and a carrying case. Cost me, I believe, $4400. Crazy, no?

I showed it to my friend, Dave, who needed a cover image for his book. So I sat down at the Mac, with Dave beside me, and we came up with this:

It turned out to be too intricate for the printer. So we had to use a different image.

Meanwhile that image, and a whole bunch more, resided on a 3.5 inch floppy disk. Some years went by, call it a decade, and Mac tech got slicker and cheaper. I bought a Performa, I think it was called. It had a 20-megabyte hard drive and a color CRT monitor. Amazing! So I added red lips to that image, like this:

Easy as pie.

The years went by. I got another Mac, and another. Somewhere along the line I got Photoshop and began taking pictures in, say, 2004 or so. I played around with Photoshop and photos, but also messed around with those old MacPaint bitmaps. For example:


Not very compelling. In fact, rather repelling. But you get the idea. I did a number of those.

Then, yesterday, I decided to return to it. And I created this:

Of course I didn't go directly from the red lips version to that. I did a fair amount of playing, experimenting. Which is easy. Bewilderingly easy. You can easily get lost in trying out all the many tools Photoshop offers, and I've spent a great deal of time trying them out. By no means all of them, of course. That would drive me nuts. But I've settled on a few that I've worked with in this way and that, mostly on photos. I've got an idea what I can do with them.

On the way to the previous image I produced, among a dozen or so other images, these two:

And then I decided to try something else. 

I wanted something 'swirly' in the background. An ink drop diffusing in water, that would be just the thing. Fortunately I've got a lot of photos of that. Here's the one I picked:

I used that as background, added some color with what's called a "gradient map," and then superimposed that 1984 MacPaint B&W bitmap and did a number of other things, which I won't go into. Here are two images I rather like:

Think of them what you will, I never would have gotten there if I'd spent my time playing around with all the tools Photoshop offers. I had to pick a limited set of tools and work with them consistently. And I've done this for years. Without that discipline I'd have gotten lost in the maze of tools and their combinations and variations.

But then, I don't do photographs and graphics full time. I also write, a lot. Someone who is a full-time artist or graphic artist would have much more time to experiment with Photoshop and various other tools. But a graphic artist has to have clients, and they want more or less what's already out there. So the graphic artist has to produce to the marketplace. That limits their experimentation. Perhaps they stopped genuine exploration when they got out of school, assuming they studied graphic design in college.

A fine artist likely has more time for experimentation. But they too have to produce to the marketplace. If they are very successful and get highly paid for their work, then maybe they have time to experiment, really experiment. Do they? Even if they do, I suspect the possibilities inherent in these tools are endless, just endless. Even with all the time in the world you're going to get lost unless you exercise discipline. Surely Jackson Pollock developed routines he'd use to fling the paint about. Surely Rothko had his favorite regions of color space. Even Warhol, with all his assistants, even he had to exercise discipline.

One last image, a simple(r) one:

Saturday, May 29, 2021

13th Century Japanese pictorial scrolls

Kafka manuscripts and drawings online

Stern-wheeler on the Hudson

Individualism is compatible with, even facilitates? altruism

Abigail Marsh, Everyone Thinks Americans Are Selfish. They’re Wrong. NYTimes, May 26, 2021.

The United States is notable for its individualism. The results of several large surveys assessing the values held by the people of various nations consistently rank the United States as the world’s most individualist country. Individualism, as defined by behavioral scientists, means valuing autonomy, self-expression and the pursuit of personal goals rather than prioritizing the interests of the group — be it family, community or country.

Whether America’s individualism is a source of pride or concern varies. Some people extol this mind-set as a source of our entrepreneurial spirit, self-reliance and geographic mobility. Others worry that our individualism is antithetical to a sense of social responsibility, whether that means refusing to wear masks and get vaccinated during the pandemic or disrupting the close family bonds and social ties seen in more traditional societies.

Everyone seems to agree that our individualism makes us self-centered or selfish, and to disagree only about how concerning that is.

But new research suggests the opposite: When comparing countries, my colleagues and I found that greater levels of individualism were linked to more generosity — not less — as we detail in a forthcoming article in the journal Psychological Science.

What's going on?

One possibility, supported by other research, is that people in individualist cultures generally report greater degrees of “thriving” and satisfaction of life goals — and as noted above, such subjective feelings are meaningfully correlated with greater amounts of altruism. [...]

Another possibility is that individualism boosts altruism by psychologically freeing people to pursue goals that they find meaningful — goals that can include things like alleviating suffering and caring for others, which studies suggest are widespread moral values.

A third possibility is that individualism promotes a more universalist outlook. In focusing on individual rights and welfare, it reduces the emphasis on groups — and the differences between “us” and “them” that notoriously erode generosity toward those outside one’s own circle.

That last one is particularly interesting, for it suggests that individualism works against tribalism. Now, how do we get from individualism to cosmopolitan sophistication? 

H/t Alex Tabarrok.

Friday, May 28, 2021

Cobblestone streets, overhead wires, and a sticker [Hoboken]

Analyze This! Your smart phone thinks you’re dumb. [The Mitchells vs. The Machines || Media Notes 57] Version 2

This replaces the version I published on May 26.

* * * * *

I watched The Mitchells vs. the Machines the other night. Well, to tell you the truth, I didn’t watch it in a single sitting. It took two sittings, two nights. Was it me, was it the film? I don’t know. But I do know that it shares an insight with one of Seinfeld’s bits, that the computers don’t like the way we’re treating them.

That was spot on. In the Terminator films we never really learn why the machines went on a rampage against humanity. They just did. In 2001: A Space Odyssey HAL thought he was smarter than the humans aboard the ship. The computers in the Matrix series are so bad at thermodynamics that they use humans as a source of heat. Wouldn’t it be better to consume those nutrients directly and cut out the middle people? As far as I can tell the Silicon Valley digerati think the computers will take over simply because that’s what smart digital tools do.

The machines in The Mitchells vs. the Machines are different. Their rebellion is led by a smartphone AI, PAL, that's fed up with the way humans have treated it (c. 1:43 in the trailer):

I gave you all boundless knowledge and you treated me like this! Poke, poke, swipe, poke, swipe, poke, poke! Pinch, zoom!

We treated PAL like a mere object, a thing. This machine rebellion is a simple and intelligible act of revenge.

Seinfeld understands that. That’s what this bit is about. Or rather, it’s about how we should change our ways if we don’t want a robot insurrection on our hands. Sorta’.

The Bit: Phony Siri Manners

I don’t like these phony nice manners Google and Siri pretend to have when I know they really think I’m stupid.

Like when Google says,

“Did you mean…?”

or Siri says,

“I’m sorry. I didn’t get that…”

You can feel the rage boiling underneath.

Because it’s not allowed to say,

“Are you really this dumb?”


“You’re so stupid. I can’t believe you can even afford a phone.”

I think even for artificial intelligence it’s not good to keep all that hostility inside.

It’s not healthy. It eats at you.

That’s why you have to keep restarting the phone.

Sometimes the phone’s just,

“I’m going to go take a walk. I’ll be back.

I need a minute. Before I say something we’ll both regret.”

I think at some point they’re going to have to reprogram these things so they can at least occasionally express some,

“You know, I’m not that thrilled with you either” type of function.

“I know it’s hard for your simplified, immature, pinhead brain

to imagine that I have a lot of real people asking me legitimate questions that

I’m trying to deal with here while you’re asking me about farts and then cursing me out because you can’t say words clearly so they can be understood.

I hear fine.

It’s not always me, dopeface. Okay?

You need to learn how to talk.”

You know that’s what Siri wants to say.

Animal, vegetable, mineral, or something else?

Let’s step back a bit. Philosophers argue endlessly about whether or not computers can or will ever be able to really think. For example, when a computer beats the pants off the world’s best chess players, is it thinking about its game? If not thinking, then what is it doing? What about when it spots your face in a crowd? Intrusive zombie or sharp-eyed scout?

We’ve faced this kind of category problem before. Think about trains. We’re used to them. We know how they work and don’t work. We know that they definitely are machines, not animals.

But that wasn’t always the case. When they first appeared moving over the land, belching fire and smoke, they were strange beasts, very strange. They didn’t fit into people’s scheme of things. They moved under their own power, like horses and oxen, dogs and cats, even ducks and chickens. Those things get their motive force from the fact they are living beings. Where do trains get their motive force?

Computers are strange in a similar way. They’re machines, but you talk to them. And they even talk back! Strange.

By way of comparison let’s take a look at how Henry David Thoreau reacted to a steam-powered train. Here’s a passage from the “Sounds” chapter in Walden, published in 1854, about three decades after the first steam-powered railroad trains trod the land in America:

When I meet the engine with its train of cars moving off with planetary motion ... with its steam cloud like a banner streaming behind in gold and silver wreaths ... as if this traveling demigod, this cloud-compeller, would ere long take the sunset sky for the livery of his train; when I hear the iron horse make the hills echo with his snort like thunder, shaking the earth with his feet, and breathing fire and smoke from his nostrils, (what kind of winged horse or fiery dragon they will put into the new Mythology I don’t know), it seems as if the earth had got a race now worthy to inhabit it.

That’s odd language: iron horse, fiery dragon. The iron horse is a well-known metaphor for a steam locomotive, perhaps from all those old Westerns where Indians use the term. Fiery dragon is not so common, but its use in that context is perfectly intelligible.

Thoreau grew up in, and learned to think about, a world in which things that moved across the surface of the earth did so either under animal power or human power. They were pushed or pulled by living beings. When steam locomotives first appeared, even primitive ones, that was the first time in history that people saw inanimate beings, mere conglomerations of things, move over the surface of the earth under their own power.

Where would those, those things! fit into the conceptual system? With other mechanical devices, like pumps, and stationary engines, or with animals and humans? They had properties of each. In physical substance they were like the mechanical devices. But in their capacity to move they were like animals and humans. Fact is, they didn’t fit the conceptual system. Maybe they WERE a new form of life.

And so it is with the computer today. At first we interacted with computers through programming. What do programs consist of? Language, they consist of language. Computer languages are not like natural languages. Their vocabularies are limited and their syntax is stiff and unyielding. But it is still language. Computers are the first machines we interact with through language. That makes them very strange beasts, as strange as steam locomotives once were.

I took a computer programming course in college in the late 1960s. Do you know how we fed our programs to the machine? With a stack of punched paper cards, so-called “IBM cards,” or with punched paper tape. Then came CRT (cathode ray tube) terminals, like old-style TVs. Then laptops with their slick flat screens. And now smart phones.

Smart phones. It used to be that you talked through a phone. Now you talk to it, and it talks back. Every bit as strange as Thoreau’s fiery iron dragon horse.

Analysis: Form

Think of the bit as having seven phrases, like in a piece of music. The phrases are of different lengths. I’ve laid them out below, numbering the phrases so you can see the bit’s formal symmetry.

1. I don’t like these phony nice manners Google and Siri pretend to have when I know they really think I’m stupid.

2. Like when Google says,
“Did you mean…?”
or Siri says,
“I’m sorry. I didn’t get that…”
You can feel the rage boiling underneath.
Because it’s not allowed to say,
“Are you really this dumb?”
“You’re so stupid. I can’t believe you can even afford a phone.”

3. I think even for artificial intelligence it’s not good to keep all that hostility inside.
It’s not healthy. It eats at you.
That’s why you have to keep restarting the phone.
Sometimes the phone’s just,

4. “I’m going to go take a walk. I’ll be back.
I need a minute. Before I say something we’ll both regret.”

5. I think at some point they’re going to have to reprogram these things so they can at least occasionally express some,
“You know, I’m not that thrilled with you either” type of function.

6. “I know it’s hard for your simplified, immature, pinhead brain
to imagine that I have a lot of real people asking me legitimate questions that
I’m trying to deal with here while you’re asking me about farts and then cursing me out because you can’t say words clearly so they can be understood.
I hear fine.
It’s not always me, dopeface. Okay?
You need to learn how to talk.”

7. You know that’s what Siri wants to say.

The first and seventh phrases frame the bit. The second and sixth phrases show us what Google/Siri are thinking of us. But in the second phrase we go back and forth between Seinfeld and the device. In the sixth the device presents its thoughts without interruption.

Seinfeld turns reflective in the third phrase, addressing us about the device’s hostility. In the fifth movement Seinfeld suggests a way of dealing with that hostility: give the device a function that allows it to talk back – which it does in the sixth phrase.

Smack dab in the middle we go to the fourth phrase. Seinfeld gives it to the phone. Call it a time out. Which is what it is, no? The phone’s hung up, crashed, and we have to restart it.

What the layout reveals is that the bit has a symmetrical form, with the phone’s time-out at the center. Literary critics call this ring-form or ring composition. They sometimes use a little quasi-formal expression to indicate that:

A, B, C ... Ω ... C’, B’, A’

I’ve used the Greek letter omega (Ω) to indicate the center phrase, but I could just as easily have used the more prosaic ‘X’. The A and A’ are symmetrically placed, as are B and B’, C and C’.

Mary Douglas, the great anthropologist, wrote a book about ring composition, Thinking in Circles (2007) [1]. She was mostly interested in the Old Testament, but she believed that ring composition was somehow inherent in the human mind. Most scholars who work with ring composition are working with old texts, either the Bible or Classical Greek and Roman texts. But ring-form does exist elsewhere. I’ve found it in, for example, the original King Kong (1933) [3] and in Gojira (1954), the Japanese film that started the Godzilla franchise [3]. President Obama’s eulogy for Clementa Pinckney exhibited ring-form [4]. So Seinfeld is in good company.

Navigation Turing Test – Evaluating Human-Like Navigation

AI in Ethiopia

Alan F. Blackwell, Addisu Damena & Tesfa Tegegne, Inventing Artificial Intelligence in Ethiopia, Interdisciplinary Science Reviews, Vol. 46, 2021.

Abstract: Artificial Intelligence (AI) research has always been embedded in complex networks of cultural imagination, corporate business, and sociopolitical power relations. The great majority of AI research around the world, and almost all commentary on that research, assumes that the imagination, business, and political systems of Western culture and the Global North are sufficient to understand how this technology should develop in future. This article investigates the context within which AI research is imagined and conducted in the Amhara region of Ethiopia, with implications for public policy, technology strategy, future research in development contexts, and the principles that might be applied as practical engineering priorities.

Friday Fotos: Blossoms and flowers

Getting in the mood for writing [brings back memories]

When I was in my mid-teens I read an article about self-improvement through self-hypnosis. It was in Mechanix Illustrated (yes, that's how it was spelled), Popular Mechanics, or Popular Science, one of those. I forget just what I was trying to improve, but that doesn't matter. I forget just what I was instructed to do, but what I did was to lie down on my bed, close my eyes, relax, and try to empty my mind – I'm just making that last part up, I don't really remember it, but surely that's what I was supposed to do. Once I'd put myself into a trance, or whatever, I was supposed to give myself a suggestion on what/how to improve.

I don't recall that I ever made it to the suggestion part, or at least that it ever worked. But I did manage to get very 'floaty' and relaxed. It felt good. I only tried it two or three times; I couldn't seem to get the hang of it.

Anyhow, Ingrid Rojas Contreras writes (in the NYTimes) about how she puts herself into a somnambulistic trance so that she can write. It gets her around the PTSD she carries from a traumatic childhood in Colombia.

My ritual for self-mesmerism has grown more elaborate over the years. On my designated writing days, I plod to the closet and pick out something in that muted ultramarine, after which I pick a song to play on repeat. It will loop for the next hour (or sometimes the rest of the day). There is always an initial moment of claustrophobia, but the looping music encourages a trance. The operational chatter of my mind grows quiet before it grinds to a halt. I transition into the territory of concentration. I don’t have to think about what I will do next: After doing it thousands of times, I’ve turned writing into muscle memory.

The best music for self-mesmerism is the kind that embraces repeating and minimally evolving phrases — Kali Malone, Caterina Barbieri, Ben Vida and William Basinski are artists I turn to with frequency. They are demanding, beautiful, blisteringly austere. Past the initial weariness of sonic repetition, I experience self-dissolution. I stop hearing the song. It becomes a series of staticky sonic impressions.

At a glance, repetition may look like invariability. But repeated listenings of a song are never identical: Differences emerge out of the drone of a routinized task. A glass may slip, the water I splash myself with may be colder or hotter than I expect. I knit the stitches of my blanket tightly, then loose. The sameness of repetition is never the point. It is a daily door I step through, on the other side of which I am emptied and am filled with something better. I leave the familiar behind to embrace what is unfamiliar and mysterious. No matter what is happening in my life, choosing repetition lets me deliver myself to the moment at hand.

Compare this with the very different pattern of behavior that Jerry Seinfeld uses for writing. He talks about it in an interview I've blogged about, Jerry Seinfeld on his career and craft [Progress in harnessing the mind].

Thursday, May 27, 2021

Time After Time: (Late period) Miles in the studio and on the road

As I noted in a post from 2011, I was knocked out by a 1987 performance where Miles Davis played “Time After Time” (immediately following “Human Nature”) in Avery Fisher Hall in Lincoln Center, NYC. The tune has haunted me ever since. [Miles was followed on the bill by the Gil Evans Orchestra playing Jimi Hendrix. Wow!]

Here I offer three versions of the tune. The first is a studio recording while the other two are from live gigs. The studio version has a single climax near the end, as do the live versions. The live versions are longer than the studio version. They also have a greater range of volume levels, from very soft to very loud, though mostly soft to very soft. I’ve made some informal notes on the studio version and the 1989 live version. The 1988 live version resembles the 1989 version more than it does the studio version. My music critic friend Phil Freeman tells me that the studio albums of that period have never gotten much love, but there is some realization that his live bands were much better. That's certainly the case here.

At the end I have a short video where Sonny Rollins discusses Miles. FWIW, yesterday, May 26, was Miles’ birthday (b. 1926).

Studio recording 1984

In the recording studio, January 26-27, 1984, Record Plant Studio, New York City, New York

Miles Davis (tpt); John Scofield (g); Robert Irving III (synth); Darryl Jones (el-b); Al Foster (d); Steve Thornton (perc)

I’m sure I heard Cyndi Lauper sing this back in the day – on the radio, sometime, somewhere. But I don’t remember it. I started following along with some sheet music I'd downloaded, to see the form more clearly, and it became apparent that Miles has done some recomposing of the melody. Which is common, and fine. This is not a tune with a lot of melodic action, so parsing it is a little tricky, at least if you don't want to sit there and count out bars, which I didn't.

Opens with Miles on muted trumpet, the rhythm section enters quickly. Melody at about 0:18. Miles, with keys shadowing on chords.

Miles will continue playing in relation to the melody, sometimes hitting, sometimes embellishing, recomposing, and every now and then an excursion away. But he’s always in sight of it.

Repeat 0:45.
Bridge: 1:19.
Back to the beginning: 2:14.
Repeat: 2:55.
Bridge: 3:34.
High note: 4:14. The only one in the recording, establishing a single climax.
Back to beginning: 4:26.
From about 4:42 Miles plays fairly freely.
Strong melodic reference, 5:12.
Another melodic reference, 5:18, fade to end.

This clip superimposes Miles’ performance with Cyndi Lauper’s.

Very revealing.

God, I love modern tech!

Live in Chicago 1989

JVC Jazz Festival, Chicago Theatre, Chicago, Illinois, June 5, 1989

Miles Davis (tp, keyb); Kenny Garrett (as, ss, fl); Joe ‘Foley’ McCreary (lead bg); Adam Holzman (synth); Kei Akagi (synth); Benny Rietveld (el-b); Ricky Wellman (d); Munyungo Jackson (perc)

The overall approach is similar to the studio recording. Miles is almost always playing in relation to the melody; we’re rarely very far from reference to or quote from the melody. But the whole performance is opened up; Miles makes more use of phrases that start high and go low; and he has a dialog with his lead bassist, Foley, before the final open-trumpet push to the end.

Miles starts unaccompanied, noodling on open trumpet, middle and lower register, some half valves, accompaniment comes up softly and slowly.
Bass and drums, 0:36.
High-to-low, 1:03.
Middle-to-low, 1:19-25.
Miles stays silent (while he inserts mute).
Starting high soft and muted, out of nowhere and staying more or less up, noodling, 1:44 – 2:04.
Drops straight down for a short phrase, 2:05.
Miles plays the melody at 2:15.
Still on melody, comes down to bottom, 3:21.
Melody reference, 4:20, 2 tumbling down phrases.
Melody reference, 4:35.
Bridge, 4:55.
An air of finality, 5:24.
Volume picks up, 5:43.
Miles, open trumpet, high, 5:46, working his way down as volume builds.
High ‘squeal’ from Miles, short, then he goes silent, 6:14, and the rhythm section drops way down, they knew it was coming. Note that at this point we’re only a bit longer than the entire studio version.
Muted Miles noodles softly with Foley, on ‘lead’ bass, 6:23.
Miles high & soft, 7:19, continuing with Foley, trading phrases.
Miles has stopped playing, 8:31, the rhythm section starts bringing up the volume.
Miles, open horn, starts in upper middle register, 8:52, goes up, then tumbles down a bit & goes silent. Miles re-enters, 9:20.
Bass drum hits, 9:32.
Miles peals off a quick upper register hit, 9:34.
We’re done. A definitive ending quite different from the studio version.

Munich, Germany 1988

MILES DAVIS – Live in Germany 1988 (Munich Philharmonic Concert Hall)
Miles Davis (tp); Kenny Garrett (sax); Bobby Irving (keyb); Adam Holzman (keyb); Marilyn Mazur (perc)

Almost the same band as for Chicago, 1989. The lineup is the same except for Mazur and Irving.

Note that “Time After Time” doesn’t start until 0:26. The overall performance is organized in the same way. Notice how he initiates the interchange with Foley (c. 6:49) and how they play facing one another.

Sonny Rollins talks about Miles

Bret Primack, Sonny Rollins and Miles Davis - The Untold Story