Saturday, November 25, 2023

On possible cross-fertilization between AI and neuroscience [Creativity]

MIT Center for Brains, Minds, and Machines (CBMM), a panel discussion: CBMM10 - A Symposium on Intelligence: Brains, Minds, and Machines.

On which critical problems should Neuroscience, Cognitive Science, and Computer Science focus now? Do we need to understand fundamental principles of learning -- in the sense of theoretical understanding like in physics -- and apply this understanding to real natural and artificial systems? Similar questions concern neuroscience and human intelligence from the perspectives of society, industry, and science.

Panel Chair: T. Poggio
Panelists: D. Hassabis, G. Hinton, P. Perona, D. Siegel, I. Sutskever

Quick Comments

1.) I’m a bit annoyed that Hassabis is giving neuroscience credit for the idea of episodic memory. As far as I know, the term was coined by the cognitive psychologist Endel Tulving in the early 1970s, who set it in opposition to semantic memory. That distinction was all over the place in the cognitive sciences in the 1970s, and it’s second nature to me. When ChatGPT places a number of events in order to make a story, that’s episodic memory.

2.) Rather than theory, I like to think of what I call speculative engineering. I coined the phrase in the preface to my book about music (Beethoven’s Anvil), where I said:

Engineering is about design and construction: How does the nervous system design and construct music? It is speculative because it must be. The purpose of speculation is to clarify thought. If the speculation itself is clear and well-founded, it will achieve its end even when it is wrong, and many of my speculations must surely be wrong. If I then ask you to consider them, not knowing how to separate the prescient speculations from the mistaken ones, it is because I am confident that we have the means to sort these matters out empirically. My aim is to produce ideas interesting, significant, and clear enough to justify the hard work of investigation, both through empirical studies and through computer simulation.

3.) On Chomsky (Hinton & Hassabis): Yes, Chomsky is fundamentally wrong about language. Language is primarily a tool for conveying meaning from one person to another and only derivatively a tool for thinking. And he’s wrong in claiming that, because LLMs can learn any language, they are useless for the scientific study of language. Another problem with Chomsky’s thinking is that he has no interest in process, which is in the realm of performance, not competence.

Let us assume for the sake of argument that the introduction of a single token into the output stream requires one primitive operation of the virtual system being emulated by an LLM. By that I mean that there is no logical operation within the process, no AND or OR, no shift of control; all that’s happening is one gigantic calculation involving all the parameters in the system. That means that the number of primitive operations required to produce a given output is equal to the number of tokens in that output. I suggest that that places severe constraints on the organization of the LLM’s associative memory.
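
To make that concrete, here’s a minimal sketch, with a hypothetical forward function standing in for that one gigantic calculation (this isn’t any particular LLM’s API, just the shape of the argument):

```python
# Hypothetical sketch: token generation where each emitted token costs exactly
# one "primitive operation" -- one huge calculation over all the parameters,
# with no branching, no AND or OR, no shift of control inside the step.

def generate(forward, prompt_tokens, max_new_tokens):
    """forward(tokens) -> next_token stands in for the single gigantic calculation."""
    tokens = list(prompt_tokens)
    primitive_ops = 0
    for _ in range(max_new_tokens):
        next_token = forward(tokens)   # one primitive operation per token
        primitive_ops += 1
        tokens.append(next_token)
    # The number of primitive operations equals the number of tokens produced.
    assert primitive_ops == max_new_tokens
    return tokens
```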

Contrast that with what happens in a classical symbolic system. Let us posit that each time a word (not quite the same as a token in an LLM, but the difference is of no consequence) is emitted, the emission itself requires a single primitive operation in the classical system. Beyond that, however, a classical system has to execute numerous symbolic operations in order to arrive at each word. Regardless of just how those operations resolve into primitive symbolic operations, the number has to be larger, perhaps considerably larger, than the number of primitive operations an LLM requires. I suggest that this process places fewer constraints on the organization of a symbolic memory system.
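
Here’s an equally schematic sketch of the classical side. The particular operations (lexical retrieval, candidate selection, a grammatical check) are illustrative stand-ins, not anyone’s actual architecture; the only point is that several symbolic operations precede each emitted word:

```python
# Hypothetical sketch of a classical symbolic generator. Several symbolic
# operations, each counted as primitive, intervene before each word is emitted,
# so the operation count per word is larger than one.

def generate_symbolic(plan, lexicon, grammar_check):
    """plan: sequence of concepts; lexicon: concept -> candidate words;
    grammar_check(word, output_so_far) -> bool. All names are illustrative."""
    output = []
    ops = 0
    for concept in plan:
        candidates = lexicon[concept]                  # symbolic op: lexical retrieval
        ops += 1
        word = next((w for w in candidates             # symbolic op: candidate selection
                     if grammar_check(w, output)),     # symbolic op: grammatical check
                    candidates[0])
        ops += 2
        output.append(word)                            # the emission itself
        ops += 1
    return output, ops                                 # ops is several times len(output)
```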

At this point I’ve reached 45:11 in the video, but I have to stop and think. Perhaps I’ll offer some more comments later.

LATER: Creativity

4.) Near the end (01:20:00 or so) the question of creativity comes up. Hassabis says AIs aren't there yet. Hinton brings up analogy, pointing out that, with all the vast knowledge LLMs have ingested, they've got opportunities for coming up with analogy after analogy after analogy. I've got experience with ChatGPT that's directly relevant to those issues, analogy and creativity.

One of the first things I did once I started playing with ChatGPT was have it undertake a Girardian interpretation of Steven Spielberg's Jaws. To do that it had to determine whether or not there was an analogy between events in the film and the phenomena that Girard theorizes about. It did that fairly well. So I wrote that up and published it in 3 Quarks Daily, Conversing with ChatGPT about Jaws, Mimetic Desire, and Sacrifice. Near the end I remarked:

I was impressed with ChatGPT’s capabilities. Interacting with it was fun, so much fun that at times I was giggling and laughing out loud. But whether or not this is a harbinger of the much-touted Artificial General Intelligence (AGI), much less a warning of impending doom at the hands of an All-Knowing, All-Powerful Superintelligence – are you kidding? Nothing like that, nothing at all. A useful assistant for a variety of tasks, I can see that, and relatively soon. Maybe even a bit more than an assistant. But that’s as far as I can see.

We can compare what ChatGPT did in response to my prompting with what I did unprompted, freely and of my own volition. There’s nothing in its replies that approaches my article, Shark City Sacrifice, nor the various blog posts I wrote about the film. That’s important. I was not expecting, much less hoping, that ChatGPT would act like a full-on AGI. No, I have something else in mind.

What’s got my attention is what I had to do to write the article. In the first place I had to watch the film and make sense of it. As I’ve already indicated, we have no artificial system with the required capabilities, visual, auditory, and cognitive. I watched the film several times in order to be sure of the details. I also consulted scripts I found on the internet. I also watched Jaws 2 more than once. Why did I do that? There’s curiosity and general principle. But there’s also the fact that the Wikipedia article for Jaws asserted that none of the three sequels were as good as the original. I had to watch the others to see for myself – though I was unable to finish watching either of the last two.

At this point I was on the prowl, though I hadn’t yet decided to write anything.

I now asked myself why the original was so much better than the first sequel, which was at least watchable. I came up with two things: 1) the original film was well-organized and tight while the sequel sprawled, and 2) Quint: there was no character in the sequel comparable to Quint.

Why did Quint die? Oh, I know what happened in the film; that’s not what I was asking. The question was an aesthetic one. As long as the shark was killed the town would be saved. That necessity did not entail Quint’s death, nor anyone else’s. If Quint hadn’t died, how would the ending have felt? What if it had been Brody or Hooper?

It was while thinking about such questions that it hit me: sacrifice! Girard! How is it that Girard’s ideas came to me? I wasn’t looking for them, not in any direct sense. I was just asking counterfactual questions about the film.

Whatever.

Once Girard was on my mind I smelled blood, that is, the possibility of writing an interesting article. I started reading, making notes, and corresponding with my friend, David Porush, who knows Girard’s thinking much better than I do. Could I make a nice tight article? That’s what I was trying to figure out. It was only after I’d made some preliminary posts, drafted some text, and run it by David that I decided to go for it. The article turned out well enough that I decided to publish it. And so I did.

It’s one thing to figure out whether or not such and such a text/film exhibits such and such a pattern when you are given the text and the pattern. That’s what ChatGPT did. Since I had already made the connection between Girard and Jaws it didn’t have to do that. I was just prompting ChatGPT to verify the connection, which it did (albeit in a weak way). That’s the kind of task we set for high school students and lower division college students. […]

I don’t really think that ChatGPT is operating at a high school level in this context. Nor do I NOT think that. I don’t know quite what to think. And I’m happy with that.

The deeper point is that there is a world of difference between what ChatGPT was doing when I piloted it into Jaws and Girard and what I eventually did when I watched Jaws and decided to look around to see what I could see. How is it that, in that process, Girard came to me? I wasn’t looking for Girard. I wasn’t looking for anything in particular. How do we teach a computer to look around for nothing in particular and come up with something interesting?

These observations are informal and are only about a single example. Given those limitations it's difficult to imagine a generalization. But I didn't hear anything from those experts that was comparably rich.

Hinton gave an example of an analogy that he posed to GPT-4 (01:18:30): “What has a compost heap got in common with an atom bomb?” It got the answer he was looking for, chain reaction, albeit at different energy levels and different rates. That's interesting. Why wasn't the panel ready with 20 such examples among them? Perhaps more to the point, doesn't Hinton see that it is one thing for GPT-4 to explain an analogy he presents to it, but that coming up with the analogy in the first place is a different kind of mental process?

Do they not have more such examples from their own work? Don't they think about their own work process, all the starts and stops, the wandering around, the dead ends and false starts, the open-ended exploration, that came before final success? And even then, no success is final, but only provisional pending further investigation. Can they not see the difference between what they do and what their machines do? Do they think all the need for exploration will just vanish in the face of machine superintelligence? Do they really believe that the universe is that small?

STILL LATER: Hinton and Hassabis on analogies

Hinton continues with analogies and Hassabis weighs in:

1:18:28 – GEOFFREY HINTON: We know that being able to see analogies, especially remote analogies, is a very important aspect of intelligence. So I asked GPT-4, what has a compost heap got in common with an atom bomb? And GPT-4 nailed it, most people just say nothing. DEMIS HASSABIS: What did it say ...

1:19:09 – And the thing is, it knows about 10000 times as much as a person, so it's going to be able to see all sorts of analogies that we can't see. DEMIS HASSABIS: Yeah. So my feeling is on this, and starting with things like AlphaGo and obviously today's systems like Bard and GPT, they're clearly creative in ...

1:20:18 – New pieces of music, new pieces of poetry, and spotting analogies between things you couldn't spot as a human. And I think these systems can definitely do that. But then there's the third level which I call like invention or out-of-the-box thinking, and that would be the equivalent of AlphaGo inventing Go.

Well, yeah, sure, GPT-4 has all this stuff in its model, way more topics than any one human. But where’s GPT-4 going to “stand” so it can “look over” all that stuff and spot the analogies? That requires some kind of procedure. What is it?

For example, it might partition all that knowledge into discrete bits and then set up a 2D matrix with a column and a row for each discrete chunk of knowledge. Then it can move systematically through the matrix, checking each cell to see whether or not the pair in that cell is a useful analogy. What kind of tests does it apply to make that determination? I can imagine there might be a test or tests that allow a quick and dirty rejection of many candidates. But for those that remain, what can you do but see if any useful knowledge follows from trying out the analogy? How long will that determination take? And so forth.
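
To put some rough numbers on that procedure, here’s a toy sketch. The chunking, the quick_reject filter, and the evaluate test are all hypothetical placeholders; the only point is that the cells to be visited grow quadratically with the number of chunks:

```python
from itertools import combinations

# Hypothetical sketch of the matrix procedure described above. quick_reject
# and evaluate stand in for whatever tests the system might apply.

def find_analogies(chunks, quick_reject, evaluate, threshold):
    analogies = []
    for a, b in combinations(chunks, 2):    # one cell per pair of chunks
        if quick_reject(a, b):              # quick and dirty rejection
            continue
        score = evaluate(a, b)              # costly: try the analogy out
        if score > threshold:
            analogies.append((a, b, score))
    return analogies

# Even modest chunking is hopeless: 1,000,000 chunks yields roughly
# 5 * 10**11 pairs to consider before any of the expensive evaluation begins.
```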

That’s absurd on the face of it. What else is there? I just explained what I went through to come up with an analogy between Jaws and Girard. But that’s just my behavior, not the mental process that’s behind the behavior. I have no trouble imagining that, in principle, having these machines will help speed up the process, but in the end I think we’re going to end up with a community of human investigators communicating with one another while they make sense of the world. The idea that one of these days we’ll have a machine that takes humans out of the process altogether, an idea which, judging from remarks he’s made elsewhere, Hinton seems to hold, is an idle fantasy.
