Think of this post as an adjunct to my series on A.I., chess, and language, which is about the structure of computation in relation to difficult problems.
NOTE: It runs long, so sit back, relax, pour a Diet Coke, some San Pellegrino, a scotch, light up a spliff (assuming it’s legal where you live) — whatever you do to make online reading tolerable — and settle in for the duration. Or you could just print it out.
I’m interested in the general question of what it would mean to say that an A.I. is more intelligent than the most intelligent human, something like that. That’s an issue that’s being debated extensively these days. For the most part I don’t think the issue is very well formulated.
To be honest, I don’t find it to be a very compelling issue. It doesn’t nag at me. If others weren’t discussing it, I wouldn’t bother.
The notion of intelligence itself remains vague despite all the discussion that it has occasioned. I rather expect that as A.I. becomes more developed, we’ll develop a more sophisticated understanding of the issue. The general notion seems like it can be captured in a simple analogy:
Intelligence is to a mind’s capacity for dealing with cognitive tasks, such as finding a cure for cancer
AS
Horsepower is to an engine’s ability to deal with mechanical tasks, such as the acceleration of an automobile.
But I don’t want to take up the general issue in this post. Rather, I want to look at analogical reasoning. I 1) start with a specific kind of analogical reasoning, interpreting narratives, 2) use some remarks Geoffrey Hinton made about analogy to introduce some more general remarks, 3) move on to another specific example, an analogy between double-entry bookkeeping and supply and demand, and then 4) conclude with a quick comparison to computer chess, which can be implemented as search in a well-defined space, while searching for and explicating analogies cannot.
Interpreting Texts: Girard and Jaws
A lot of literary interpretation proceeds by constructing analogies between events that happen in texts and patterns of behavior specified in some theory about human behavior. Psychoanalysis has theories about how individuals behave in their personal lives; Marxism has theories about how social class and roles in the economy affect individual action. René Girard has a theory about imitation and sacrifice. That’s what I want to look at.
Early in 2022 I did a Girardian interpretation of Spielberg’s Jaws. At the end of that year I had ChatGPT do the same. Of course ChatGPT can’t watch movies, but Jaws is well-known and there’s lots of stuff about it on the web, including scripts, though I don’t know whether any of the available scripts give the dialog word-for-word as it is in the film (the scripts I downloaded, for free, did not). Now, I didn’t just ask ChatGPT to use Girard’s ideas to interpret the film. First I prompted it about the film itself, and specifically about Quint. Once I’d established that it had a reasonable grasp of the film I then brought up Girard, generally at first, then specifically about mimetic desire. Then I asked it: “How does mimetic desire work in Jaws?” It gave a reasonable answer:
Mimetic desire is a concept developed by the French philosopher and literary theorist Rene Girard. It refers to the idea that human desire is often based on the desire of others, and it can be seen at work in the film Jaws in a number of ways.
One example of mimetic desire in the film is the way that the townspeople of Amity Island desire the safety and security that the town’s police chief, Martin Brody, is able to provide. When the town is terrorized by a great white shark, the townspeople look to Brody to protect them and to find a way to stop the shark. This desire for protection and security is mimetic, as the townspeople are basing their desire on Brody’s ability to provide it.
Another example of mimetic desire in the film is the way that the shark itself is driven by its own insatiable hunger and desire. The shark is driven to attack and kill the townspeople because it desires their flesh and blood, and it is unable to control its own desire. This mimetic desire ultimately leads to the shark’s own destruction, as it is killed by the town’s police chief in a final confrontation.
Overall, Girard’s concept of mimetic desire can be seen as a useful way of understanding the dynamics at play in the film Jaws, and the ways in which desire can be both destructive and transformative.
The second paragraph is reasonable, but the third is problematic. Hunger is not mimetic on the face of it. But I was happy that ChatGPT seemed to have at least some idea of the nature of the reasoning required: where to look and what to look for in the film. I decided not to pursue that and turned my attention to sacrifice, first Girard’s ideas, then their application to Jaws.
The form of the required reasoning is analogical. In effect, is there an analogy between Girard’s general statements about mimetic desire and sacrifice and the specific events in Jaws? I would argue that setting up the question is more difficult than working out an answer to it. I note that it’s not the kind of question that has only one answer; thus the argument I actually made in my paper is more sophisticated than the one ChatGPT came up with through my prompting. What is it that brought me to pose the question in the first place?
I watch a fair amount of streaming video, but I don’t write about most of the titles, nor do I routinely watch a particular title with the intention of writing about it. That decision is made later. I had no intention of writing about Jaws when I decided to watch it. I was simply filling a hole in my knowledge of movies – I’d never seen the film, which I knew to be an important one. Once I’d watched the film, I read the Wikipedia article about it, something I routinely do, mostly to ‘calibrate’ my viewing experience. The article noted that the sequels were not as good as the original. I decided to see for myself. I was unable to finish watching the last two sequels (of four), but I watched Jaws 2 at least twice, and the original three or more times. It was obvious that the original was better than the others. I did the multiple viewings in part to figure out why the original was better. I was on the prowl, though I hadn’t yet decided to write anything.
I decided there were two reasons the original was best: 1) it was well-organized and tight while the sequel sprawled, and 2) Quint; there was no character in the sequel comparable to him. I had now all but decided that I would write about Jaws.
I posed a specific question: Why did Quint die? Oh, I know what happened in the film; that’s not what I was asking. The question was an aesthetic one. As long as the shark was killed the town would be saved. That necessity did not entail Quint’s death, nor anyone else’s. If Quint hadn’t died, how would the ending have felt? What if it had been Brody or Hooper?
It was while thinking about such questions that it hit me: sacrifice! Girard! How is it that Girard’s ideas came to me? I wasn’t looking for them, not in any direct sense. I was just asking counter-factual questions about the film.
With Girard on my mind I smelled blood. I had a focal point for an article. I started reading articles from various sources, making notes, and corresponding with my friend, David Porush, who knows Girard’s thinking much better than I do. Can I make a nice tight article? That’s what I was trying to figure out. It was only after I’d made some preliminary posts, drafted some text, and run it by David, that I decided to write an article. It turned out well enough that I decided to publish it.
Now, when we’re thinking about whether or not A.I.s will come to exceed our intelligence, are we imagining them going through such a process? For this kind of search and exploration is central to human thinking. I certainly do this sort of exploration when thinking about other things, such as the structure of human cognition, the nature of cultural evolution, the functioning of the nervous system, and so forth. This blog is a 14-year record of my explorations. Every so often I’ll gather some of them together in a more formal way and write a working paper, which I’ll then post at Academia.edu, SSRN, and ResearchGate. Every once in a while I’ll write an article which I’ll submit for publication in the formal academic literature – a few of those have gotten published. And then there are the monthly pieces I publish in 3 Quarks Daily, which is quite different from the formal academic literature. And of course I’ve got pages and pages of unpublished notes that support all this activity.
Is this kind of exploratory work part of the routine of the superintelligent A.I., or does it go straight for the good stuff, cranking out fully-realized work without need of exploratory effort? If so, how does it know where to dig for the good stuff? Is that what superintelligence is, knowing where the good stuff is without having to nose around? No one says anything about this. Perhaps they’re thinking about the Star Trek computer. But it knows where to look because Spock points it in the right direction.
This brings us back to Jaws. There is a world of difference between what I did in writing about Jaws and what ChatGPT did. I did the hard part, figuring out that there was a specific intellectual objective there, Jaws and Girard. Once I’d done that there was still work to do, quite a bit of work, but it was of a different kind. I was no longer prospecting for intellectual gold. I was now constructing a system for mining the ore and then refining it into gold. ChatGPT only had to do the last part, dumping the ore into the hopper and cranking out the refined metal. I told it where to look, Jaws, what to look for, Girard’s ideas, and gave it some help turning the crank.
A year later, in January of 2024, I decided to see how ChatGPT would do without all of my prompting. I gave it this prompt:
Stephen Spielberg is an important film-maker. Jaws is one of his most important films because it is generally considered to be the first blockbuster. Rene Girard remains an important thinker. Can you use Girard’s ideas mimetic desire and sacrifice to analyze Jaws?
It didn’t do so well. It needed my prompting to get it through the exercise.
Now, no one is claiming that ChatGPT is superhuman in any respect but its ability to discourse on anything. But GPT-5, who knows, maybe it’ll be superhuman in some interesting way. If not GPT-6, or GPT-7, or maybe we’ll need a more sophisticated architecture, but surely at some point an A.I. will surpass us in the way that we surpass mice. Perhaps so.
But I have no sense that these breezy predictions are supported by thinking about how human intelligence actually goes about solving problems. It does no good to say, but it’s an A.I.; it works differently. Well, maybe yes, maybe no, but there has to be some kind of process. At the moment the human process is the only example we have. Perhaps we should think about it.
Just how is it that Girard popped into my mind in the first place? How do we teach a computer to look around for nothing in particular and come up with something interesting?
Analogy: Kumquats and MiGs
As I remarked above, the process of interpreting Jaws is an analogical one. So let’s think about analogy more generally. I’m thinking in particular of some remarks Geoffrey Hinton made at a panel discussion in October of 2023. You can find the video here. I’ve transcribed some remarks:
1:18:28 – GEOFFREY HINTON: We know that being able to see analogies, especially remote analogies, is a very important aspect of intelligence. So I asked GPT-4, what has a compost heap got in common with an atom bomb? And GPT-4 nailed it, most people just say nothing.
DEMIS HASSABIS: What did it say ...
GEOFFREY HINTON: It started off by saying they're very different energy scales, so on the face of it, they look to be very different. But then it got into chain reactions and how the rate at which they're generating energy increases– their energy increases the rate at which they generate energy. So it got the idea of a chain reaction. And the thing is, it knows about 10,000 times as much as a person, so it's going to be able to see all sorts of analogies that we can't see.
DEMIS HASSABIS: Yeah. So my feeling is on this, and starting with things like AlphaGo and obviously today's systems like Bard and GPT, they're clearly creative in ... New pieces of music, new pieces of poetry, and spotting analogies between things you couldn't spot as a human. And I think these systems can definitely do that. But then there's the third level which I call like invention or out-of-the-box thinking, and that would be the equivalent of AlphaGo inventing Go.
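Before going on, it’s worth making the shared structure of Hinton’s example explicit: in both a compost heap and an atom bomb, the energy already released increases the rate at which further energy is released. That is exponential growth, the same equation at wildly different time scales. Here’s a toy sketch in Python; the rate constants are invented for illustration, not physical values:

```python
# Toy illustration of the structure GPT-4 reportedly found: in both a
# compost heap and an atom bomb, the energy released so far increases
# the rate of further release (dE/dt = k*E), i.e. exponential growth.
# Only the rate constant k, and hence the time scale, differs.
# The k values below are made up for illustration, not physical data.
import math

def growth_factor(k: float, t: float) -> float:
    """Factor by which released energy grows by time t: E(t)/E(0) = exp(k*t)."""
    return math.exp(k * t)

cases = [
    ("compost heap", 1e-6, 7 * 24 * 3600),  # slow: noticeable over a week
    ("atom bomb",    1e8,  1e-6),           # fast: runaway in a microsecond
]
for name, k, t in cases:
    print(f"{name}: energy grows by a factor of {growth_factor(k, t):.3g}")
```

Same form, different scales, which is just what GPT-4 is reported to have said.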
OK. Let’s start from there. Given that GPT-4 “knows about 10,000 times as much as a person,” what procedure will it use “to see all sorts of analogies that we can't see”? I’m thinking of that procedure as roughly analogous to the exploratory process I undertake whenever I decide to watch some video. Every once in a while I decide to write about one of the titles. Most of the time, though, what I write isn’t as elaborate as my article about Jaws and Girard – I’ve collected many of those pieces under the rubric of Media Notes, though most of those pieces do not focus on analogical reasoning.
What’s the procedure by which GPT-4 would search through all those things it knows and come up with the interesting analogies? There isn’t one, and I suppose it’s a bit churlish of me to suggest that Hinton should specify one. But really, if he has no procedure to suggest, then what’s he talking about? We know how chess programs search the chess tree. How do we search through concept space for analogies? Alas, while the chess tree is a well-defined formal object, the same cannot be said of concept space, which is little more than a phrase in search of an explication. And how do we evaluate possible analogy-pairs?
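To sharpen the contrast with chess, here is a deliberately naive sketch of what “searching concept space for analogies” might look like. Everything in it (the concepts, the relational features, the scoring rule) is hand-invented for illustration, and that is exactly the problem: unlike the chess tree, nothing hands us the objects to search over or the function for evaluating them.

```python
# A naive sketch of "searching concept space for analogies." The concepts
# and relational features below are hand-invented; producing such
# representations, and a credible scoring rule, is the unsolved hard part.
# The search itself is trivial.
from itertools import combinations

concepts = {
    "compost heap": {"self-amplifying", "releases energy", "slow"},
    "atom bomb":    {"self-amplifying", "releases energy", "fast"},
    "rumor":        {"self-amplifying", "spreads socially"},
    "kumquat":      {"small", "edible", "citrus"},
}

def analogy_score(a: str, b: str) -> float:
    """Jaccard overlap of relational features: a crude stand-in for an
    evaluation function that no one knows how to define in general."""
    fa, fb = concepts[a], concepts[b]
    return len(fa & fb) / len(fa | fb)

# Exhaustive pairwise search: trivial over four concepts, hopeless over
# everything GPT-4 "knows."
for a, b in sorted(combinations(concepts, 2),
                   key=lambda p: analogy_score(*p), reverse=True):
    print(f"{a} / {b}: {analogy_score(a, b):.2f}")
```

The chess comparison fails at precisely this point: minimax works because the rules of chess hand us a finite, well-defined state space and a clear objective, while nothing here tells us where the feature sets come from or why overlap should count as insight.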
Perhaps the simplest procedure is simply to ask. That’s something I recently tried. Here’s the prompt I gave to ChatGPT: