Monday, March 28, 2022

Seinfeld and AI @ 3QD

My latest post at 3 Quarks Daily:

Analyze This! AI meets Jerry Seinfeld, https://3quarksdaily.com/3quarksdaily/2022/03/analyze-this-ai-meets-jerry-seinfeld.html

I took one of my Seinfeld posts from 2021 and reworked it: Analyze This! Screaming on the flat part of the roller coaster ride [Does GPT-3 get the joke?].

The back-and-forth between the human interrogator, Phil Mohun (in effect, standing in for Seinfeld), and GPT-3 is pretty much the same, but I framed it differently. I like the reframing, especially the ending:

As I already pointed out, GPT-3 doesn’t actually know anything. It was trained by analyzing swathes and acres and oceans of text sucked in off the web. Those texts are just naked word forms and, as we’ve observed, there are no meanings in word forms. GPT-3 is able to compute relationships between word forms as they appear, one after the other, in linguistic strings, and from that produce a simulacrum of sensible language when given a cue.
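Just to make “naked word forms” concrete, here is a tiny sketch of my own (it has nothing to do with GPT-3’s actual tokenizer): as far as the machinery is concerned, text is just a sequence of arbitrary integer IDs, one per word form, with no meanings attached anywhere.

# To the machinery, text is a sequence of symbol IDs; the IDs carry no meaning.
text = "the coaster climbs and the coaster drops"

vocab = {}   # word form -> arbitrary integer ID
ids = []
for word in text.split():
    if word not in vocab:
        vocab[word] = len(vocab)
    ids.append(vocab[word])

print(vocab)  # {'the': 0, 'coaster': 1, 'climbs': 2, 'and': 3, 'drops': 4}
print(ids)    # [0, 1, 2, 3, 0, 1, 4]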

How could that possibly work? Consider this passage from a famous memorandum (PDF) written by the scientist and mathematician Warren Weaver in 1949, when he was head of the Natural Sciences Division of the Rockefeller Foundation:

If one examines the words in a book, one at a time as through an opaque mask with a hole in it one word wide, then it is obviously impossible to determine, one at a time, the meaning of the words. "Fast" may mean "rapid"; or it may mean "motionless"; and there is no way of telling which.

But if one lengthens the slit in the opaque mask, until one can see not only the central word in question, but also say N words on either side, then if N is large enough one can unambiguously decide the meaning of the central word. The formal truth of this statement becomes clear when one mentions that the middle word of a whole article or a whole book is unambiguous if one has read the whole article or book, providing of course that the article or book is sufficiently well written to communicate at all.

That’s a start, and only that. But, by extending and elaborating on that, researchers have developed engines that weave elaborate webs of contextual information about relations between word form tokens. It is through such a web – in billions of dimensions! – that GPT-3 has been able to create an uncanny simulacrum of language. Hanging there, poised between chaos and order, the artificial texts tempt us into meaning.
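One crude way to see what such “contextual information” can do is simply to count which words show up within N positions of a target word. The sketch below is mine, with made-up sentences and a made-up window size; it is the distributional idea Weaver is gesturing at, not a picture of how GPT-3 itself is built:

from collections import Counter

# Two tiny "corpora" in which "fast" has different senses.
sentences = [
    "the car was fast and overtook every truck on the highway",
    "the boat was held fast to the dock by a heavy rope",
]

N = 3  # words of context on either side of the target

def context_counts(words, target, n):
    # Count the words appearing within n positions of each occurrence of target.
    counts = Counter()
    for i, w in enumerate(words):
        if w == target:
            counts.update(words[max(0, i - n):i] + words[i + 1:i + 1 + n])
    return counts

for s in sentences:
    print(context_counts(s.split(), "fast", N))
# "car" and "overtook" versus "held", "dock", and "rope": the surrounding
# words are what let a reader (or a statistical model) tell the senses apart.

In a modern engine the slit becomes a learned, high-dimensional representation rather than a hand-counted tally, but the underlying bet is the same: context pins down the word.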

Still, even those with a deep technical understanding of how GPT-3 works – I’m not one of them – don’t understand what’s going on. How could that be? What they understand well (enough) is the process by which GPT-3 builds a language model. That’s called learning mode. What they don’t understand is how the model works once it has been built. That’s called inference mode.
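The distinction is easier to see in a toy. The sketch below is mine: a bigram counter, unimaginably simpler than GPT-3, but with the same two phases, a learning step that builds the model from text and an inference step that uses the finished model to produce new text.

import random
from collections import defaultdict

# Learning mode: build the model from text (here, just bigram counts).
corpus = "what is the deal with airline food what is the deal with airports".split()
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

# Inference mode: use the finished model to generate text.
def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        if word not in model:
            break
        word = random.choice(model[word])
        out.append(word)
    return " ".join(out)

print(generate("what"))
# In this toy the finished model is fully inspectable. In GPT-3 the finished
# model is billions of numerical weights, and that is where the trouble starts.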

You might think: It’s just computer code. Can’t they “open it up” and take a look? Sure, they can. But there’s a LOT of it and it’s not readily intelligible.

But then, that’s how it is with the human brain and language. We can get glimmerings of what’s going on, but we can’t really observe brain operation directly and in detail. Even if we could, would we be able to make sense of it? The brain has 86 billion neurons and each of them has, on average, 10,000 connections to other neurons. That’s a LOT of detail. Even if we could make a record of what each one is doing, millisecond by millisecond, how would we examine that record and make sense of it?
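A back-of-the-envelope calculation, just the arithmetic implied by the figures above, gives a feel for the scale:

neurons = 86e9               # neurons in the human brain
synapses_per_neuron = 1e4    # average connections per neuron
print(f"{neurons * synapses_per_neuron:.1e} connections")   # ~8.6e+14

# One sample per neuron per millisecond, for just one hour:
samples_per_hour = neurons * 1000 * 3600
print(f"{samples_per_hour:.1e} samples per hour")           # ~3.1e+17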

So, we’ve got two mysterious processes: 1) humans creating and understanding language, 2) GPT-3 operating in inference mode. The second is derivative of and dependent on the first. Beyond that…
