Sunday, May 5, 2024

Intelligence, A.I. and analogy: Jaws & Girard, kumquats & MiGs, double-entry bookkeeping & supply and demand

Think of this post as an adjunct to my series on A.I., chess, and language, which is about the structure of computation in relation to difficult problems.

NOTE: It runs long, so sit back, relax, pour a Diet Coke, some San Pellegrino, a scotch, light up a spliff (assuming it’s legal where you live) — whatever you do to make online reading tolerable — and settle in for the duration. Or you could just print it out.

I’m interested in the general question of what it would mean to say that an A.I. is more intelligent than the most intelligent human, something like that. That’s an issue that’s being debated extensively these days. For the most part I don’t think the issue is very well formulated.

To be honest, I don’t find it to be a very compelling issue. It doesn’t nag at me. If others weren’t discussing it, I wouldn’t bother.

The notion of intelligence itself remains vague despite all the discussion that it has occasioned. I rather expect that as A.I. becomes more developed, we’ll develop a more sophisticated understanding of the issue. The general notion seems like it can be captured in a simple analogy:

Intelligence is to a mind’s capacity for dealing with cognitive tasks, such as finding a cure for cancer

AS

Horsepower is to an engine’s ability to deal with mechanical tasks, such as the acceleration of an automobile.

But I don’t want to take up the general issue in this post. Rather, I want to look at analogical reasoning. I will 1) start with a specific kind of analogical reasoning, interpreting narratives, 2) use some remarks Geoffrey Hinton made about analogy to introduce some more general remarks, 3) move on to another specific example, an analogy between double-entry bookkeeping and supply and demand, and then 4) conclude by wrapping things up with a quick comparison to chess, which can be computed in a well-defined search space, while searching for and explicating analogies cannot.

Interpreting Texts: Girard and Jaws

A lot of literary interpretation proceeds by constructing analogies between events that happen in texts and patterns of behavior specified in some theory about human behavior. Psychoanalysis has theories about how individuals behave in their personal lives; Marxism has theories about how social class and roles in the economy affect individual action. René Girard has a theory about imitation and sacrifice. That’s what I want to look at.

Early in 2022 I did a Girardian interpretation of Spielberg’s Jaws. At the end of that year I had ChatGPT do the same. Of course ChatGPT can’t watch movies, but Jaws is well-known and there’s lots of stuff about it on the web, including scripts, though I don’t know whether or not any of the available scripts give the dialog word-for-word as it is in the film (the scripts I downloaded, for free, did not). Now, I didn’t just ask ChatGPT to use Girard’s ideas to interpret the film. First I prompted it about the film itself, and specifically about Quint. Once I’d established that it had a reasonable grasp of the film I then brought up Girard, generally at first, then specifically about mimetic desire. Then I asked it: “How does mimetic desire work in Jaws?” It gave a reasonable answer:

Mimetic desire is a concept developed by the French philosopher and literary theorist Rene Girard. It refers to the idea that human desire is often based on the desire of others, and it can be seen at work in the film Jaws in a number of ways.

One example of mimetic desire in the film is the way that the townspeople of Amity Island desire the safety and security that the town’s police chief, Martin Brody, is able to provide. When the town is terrorized by a great white shark, the townspeople look to Brody to protect them and to find a way to stop the shark. This desire for protection and security is mimetic, as the townspeople are basing their desire on Brody’s ability to provide it.

Another example of mimetic desire in the film is the way that the shark itself is driven by its own insatiable hunger and desire. The shark is driven to attack and kill the townspeople because it desires their flesh and blood, and it is unable to control its own desire. This mimetic desire ultimately leads to the shark’s own destruction, as it is killed by the town’s police chief in a final confrontation.

Overall, Girard’s concept of mimetic desire can be seen as a useful way of understanding the dynamics at play in the film Jaws, and the ways in which desire can be both destructive and transformative.

The second paragraph is reasonable, but the third is problematic. Hunger is not mimetic on the face of it. But I was happy that ChatGPT seemed to at least have some idea of the nature of the reasoning required, where to look and what to look for in the film. I decided not to pursue that and turned my attention to sacrifice, first Girard, then the application to Jaws.

The form of the required reasoning is analogical. In effect, is there an analogy between Girard’s general statements about mimetic desire and sacrifice and the specific events in Jaws? I would argue that setting up the question is more difficult than working out an answer to it. I note that it’s not the kind of question that has only one answer; thus the argument I actually made in my paper is more sophisticated than the one ChatGPT came up with through my prompting. What is it that brought me to pose the question in the first place?

I watch a fair amount of streaming video, but I don’t write about most of the titles, nor do I routinely watch a particular title with the intention of writing about it. That decision is made later. I had no intention of writing about Jaws when I decided to watch it. I was simply filling a hole in my knowledge of movies – I’d never seen the film, which I knew to be an important one. Once I’d watched the film, I read the Wikipedia article about it, something I routinely do, mostly to ‘calibrate’ my viewing experience. The article noted that the sequels were not as good as the original. I decided to see for myself. I was unable to finish watching the last two sequels (of four), but I watched Jaws 2 at least twice, and the original three or more times. It was obvious that the original was better than the others. I did the multiple viewings in part to figure out why the original was better. I was on the prowl, though I hadn’t yet decided to write anything.

I decided there were two reasons the original was best: 1) it was well-organized and tight while the sequel sprawled, and 2) Quint; there was no character in the sequel comparable to Quint. I had now all but decided that I would write about Jaws.

I posed a specific question: Why did Quint die? Oh, I know what happened in the film; that’s not what I was asking. The question was an aesthetic one. As long as the shark was killed the town would be saved. That necessity did not entail Quint’s death, nor anyone else’s. If Quint hadn’t died, how would the ending have felt? What if it had been Brody or Hooper?

It was while thinking about such questions that it hit me: sacrifice! Girard! How is it that Girard’s ideas came to me? I wasn’t looking for them, not in any direct sense. I was just asking counter-factual questions about the film.

With Girard on my mind I smelled blood. I had a focal point for an article. I started reading articles from various sources, making notes, and corresponding with my friend, David Porush, who knows Girard’s thinking much better than I do. Can I make a nice tight article? That’s what I was trying to figure out. It was only after I’d made some preliminary posts, drafted some text, and run it by David, that I decided to write an article. It turned out well enough that I decided to publish it.

Now, when we’re thinking about whether or not A.I.s will come to exceed our intelligence, are we imagining them going through such a process? For this kind of search and exploration is central to human thinking. I certainly do this sort of exploration when thinking about other things, such as the structure of human cognition, the nature of cultural evolution, the functioning of the nervous system, and so forth. This blog is a 14-year record of my explorations. From time to time I gather some of them together in a more formal way and write a working paper, which I then post at Academia.edu, SSRN and ResearchGate. Every once in a while I’ll write an article which I’ll submit for publication in the formal academic literature – a few of those have gotten published. And then there are the monthly pieces I publish in 3 Quarks Daily, a venue quite different from the formal academic literature. And of course I’ve got pages and pages of unpublished notes that support all this activity.

Is this kind of exploratory work part of the routine of the superintelligent A.I., or does it go straight for the good stuff, cranking out fully-realized work without need of exploratory effort? If so, how does it know where to dig for the good stuff? Is that what superintelligence is, knowing where the good stuff is without having to nose around? No one says anything about this. Perhaps they’re thinking about the Star Trek computer. But it knows where to look because Spock points it in the right direction.

This brings us back to Jaws. There is a world of difference between what I did in writing about Jaws and what ChatGPT did. I did the hard part, figuring out that there was a specific intellectual objective there, Jaws and Girard. Once I’d done that there was still work to do, quite a bit of work, but it was of a different kind. I was no longer prospecting for intellectual gold. I was now constructing a system for mining the ore and then refining it into gold. ChatGPT only had to do the last part, dumping the ore into the hopper and cranking out the refined metal. I told it where to look, Jaws, what to look for, Girard’s ideas, and gave it some help turning the crank.

A year later, in January of 2023, I decided to see how ChatGPT would do without all of my prompting. I gave it this prompt:

Stephen Spielberg is an important film-maker. Jaws is one of his most important films because it is generally considered to be the first blockbuster. Rene Girard remains an important thinker. Can you use Girard’s ideas mimetic desire and sacrifice to analyze Jaws?

It didn’t do so well. It needed my prompting to get it through the exercise.

Now, no one is claiming that ChatGPT is superhuman in any respect but its ability to discourse on anything. But GPT-5, who knows, maybe it’ll be superhuman in some interesting way. If not GPT-6, or GPT-7, or maybe we’ll need a more sophisticated architecture, but surely at some point an A.I. will surpass us in the way that we surpass mice. Perhaps so.

But I have no sense that these breezy predictions are supported by thinking about how human intelligence actually goes about solving problems. It does no good to say, but it’s an A.I.; it works differently. Well, maybe yes, maybe no, but there has to be some kind of process. At the moment the human process is the only example we have. Perhaps we should think about it.

Just how is it that Girard popped into my mind in the first place? How do we teach a computer to look around for nothing in particular and come up with something interesting?

Analogy: Kumquats and MiGs

As I remarked above, the process of interpreting Jaws is an analogical one. So let’s think about analogy more generally. I’m thinking in particular of some remarks Geoffrey Hinton made at a panel discussion in October of 2023. You can find the video here. I’ve transcribed some remarks:

1:18:28 – GEOFFREY HINTON: We know that being able to see analogies, especially remote analogies, is a very important aspect of intelligence. So I asked GPT-4, what has a compost heap got in common with an atom bomb? And GPT-4 nailed it, most people just say nothing.

DEMIS HASSABIS: What did it say ...

GEOFFREY HINTON: It started off by saying they're very different energy scales, so on the face of it, they look to be very different. But then it got into chain reactions and how the rate at which they're generating energy increases– their energy increases the rate at which they generate energy. So it got the idea of a chain reaction. And the thing is, it knows about 10,000 times as much as a person, so it's going to be able to see all sorts of analogies that we can't see.

DEMIS HASSABIS: Yeah. So my feeling is on this, and starting with things like AlphaGo and obviously today's systems like Bard and GPT, they're clearly creative in ... New pieces of music, new pieces of poetry, and spotting analogies between things you couldn't spot as a human. And I think these systems can definitely do that. But then there's the third level which I call like invention or out-of-the-box thinking, and that would be the equivalent of AlphaGo inventing Go.

OK. Let’s start from there. Given that GPT-4 “knows about 10,000 times as much as a person,” what procedure will it use “to see all sorts of analogies that we can't see”? I’m thinking of that procedure as roughly analogous to the exploratory process I undertake whenever I decide to watch some video. Every once in a while I decide to write about one of the titles. Most of the time, though, what I write isn’t as elaborate as my article about Jaws and Girard – I’ve collected many of those pieces under the rubric of Media Notes, though most of those pieces do not focus on analogical reasoning.

What’s the procedure by which GPT-4 would search through all those things it knows and come up with the interesting analogies? There isn’t one, and I suppose it’s a bit churlish of me to suggest that Hinton should specify one. But really, if he has no procedure to suggest, then what’s he talking about? We know how chess programs search the chess tree. How do we search through concept space for analogies? Alas, while the chess tree is a well-defined formal object, the same cannot be said of concept space, which is little more than a phrase in search of an explication. And how do we evaluate possible analogy-pairs?
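To make the difficulty concrete, here is a deliberately naive sketch, entirely my own construction and not anything Hinton proposed: treat ‘concept space’ as a vector-embedding space and rank concept pairs by cosine similarity. The vectors below are invented for illustration; real ones would come from a trained model. Notice what the procedure cannot do: nothing in the score distinguishes a deep structural analogy, like the chain reaction linking compost heaps and atom bombs, from surface co-occurrence.

```python
import numpy as np

# Toy "concept space": real systems would use embeddings from a trained
# model; these vectors are invented for illustration only.
concepts = {
    "compost heap": np.array([0.9, 0.1, 0.7]),
    "atom bomb":    np.array([0.8, 0.2, 0.9]),
    "kitten":       np.array([0.1, 0.9, 0.2]),
    "bicycle":      np.array([0.2, 0.3, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Naive candidate generation: rank every pair by embedding similarity
# and call the top pairs "analogy candidates". The score measures
# geometric closeness, not analogical depth.
names = list(concepts)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
for a, b in sorted(pairs, key=lambda p: -cosine(concepts[p[0]], concepts[p[1]])):
    print(f"{a} ~ {b}: {cosine(concepts[a], concepts[b]):.3f}")
```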

Perhaps the simplest procedure is simply to ask. That’s something I recently tried. Here’s the prompt I gave to ChatGPT:

I am going to present you with a bunch of analogies and I want you to explain them. Some of them may same more plausible than others, but don’t let that bother. Just find analogous features.

Here’s the first one: A kitten and a bicycle.

Certainly! Let's explore the analogy between a kitten and a bicycle:

  1. Learning Curve:
    • Kitten: Like learning to ride a bicycle, a kitten goes through a learning curve as it develops coordination and balance skills. It starts by stumbling and gradually becomes more adept at moving around.
    • Bicycle: When someone first learns to ride a bicycle, there is a learning curve in terms of balance and coordination. Over time, the rider becomes more skilled and comfortable.
  2. Independence:
    • Kitten: As a kitten grows, it becomes more independent, learning to explore its surroundings and perform activities on its own.
    • Bicycle: Riding a bicycle grants a sense of independence, allowing the rider to travel to different places under their control.
  3. Care and Maintenance:
    • Kitten: Similar to a bicycle requiring maintenance, a kitten needs care, including feeding, grooming, and veterinary attention.
    • Bicycle: Regular maintenance is essential for a bicycle to function smoothly. This involves checking and adjusting various components, such as the tires, brakes, and gears.
  4. Playfulness:
    • Kitten: Kittens are known for their playful nature, exploring their environment and engaging in playful activities.
    • Bicycle: Riding a bicycle can be a form of recreation and play, especially for leisure rides or sports.
  5. Adaptability:
    • Kitten: Kittens can adapt to various environments and situations, showcasing their flexibility.
    • Bicycle: Bicycles are versatile and can be adapted for different terrains or purposes, such as road cycling, mountain biking, or commuting.
While a kitten and a bicycle may seem quite different at first glance, there are interesting similarities when considering aspects like learning, independence, care, playfulness, and adaptability.

Now, you and I might think that that particular analogy is absurd on the face of it. Nonetheless ChatGPT had no trouble justifying it. Is that an indication of superior intelligence or of something else?

Here are the other analogies I asked it to explicate:

  • A bicycle and a food blender.
  • A kitten and a clock.
  • A Jack-in-the-box and the moon.
  • A clock and a sea anemone.
  • A garbage heap and a submarine.
  • A submarine and a gourmet Chinese banquet.
  • A helicopter and a Beethoven piano sonata.
  • A novel by Tolstoy and a MiG fighter jet.
  • A three-dimensional matrix and the Christian Trinity.

It had no difficulty with any of them.

Here’s how I ended the exercise:

Which of those various analogies made the most sense?

The assessment of which analogy makes the most sense is subjective and depends on individual perspectives and preferences. Each analogy draws connections between seemingly unrelated concepts, highlighting similarities in various aspects. The significance of an analogy may vary based on personal interests, experiences, and understanding of the involved concepts.

If you have a specific criterion or context in mind for determining the "most sense," please provide more details, and I can offer a more tailored response. Otherwise, the value of each analogy lies in its ability to illuminate interesting parallels between diverse concepts.

Which of those analogies is most like a kumquat?

Among the analogies discussed, none directly relates to a kumquat, as the analogies primarily involve diverse concepts such as animals, objects, and abstract ideas. However, if we consider the characteristics of a kumquat, such as its small citrus fruit nature, one could draw a loose analogy with the MiG fighter jet. This analogy is based on the idea that both the kumquat and the MiG fighter jet possess compact and concentrated features within their respective domains – one being a small fruit with intense citrus flavor and the other being a nimble and agile military aircraft. Keep in mind that this analogy is more creative and symbolic rather than being a direct and precise comparison.

It’s not clear to me just what conclusion is to be drawn here.

Perhaps we have GPT-4 start cranking out lists of concept-pairs to be tested for analogical fruitfulness. We then hand those lists to gig workers, who feed them into GPT-4 as I prompted ChatGPT above. They rate each analogy explication for intellectual plausibility, and those ratings are fed back to GPT-4 for another round of fine-tuning. But how are those gig workers to assess the intellectual plausibility of GPT-4’s output? Someone with a taste for comedy is likely to provide very different ratings from someone interested in physics. In any event, the number of available concept-pairs is so large as to render the exercise pointless. I suppose we could speed things up a bit if we had our army of gig workers pre-select candidate pairs from the lists GPT-4 generates so that most candidate pairs are never presented for actual analogical explication. But what criteria will they apply to make the selection? Even then, the number of candidate pairs is so huge...
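For concreteness, here is a minimal sketch of that pipeline, with a hypothetical llm() stub standing in for a real API call. It makes both problems visible in code: the space of candidate pairs grows quadratically with the vocabulary, and the rating field has no defined semantics.

```python
import itertools, json, random

def llm(prompt: str) -> str:
    # Hypothetical stand-in for a call to some LLM API.
    return f"[explication for: {prompt}]"

CONCEPTS = ["kitten", "bicycle", "submarine", "clock", "sea anemone"]

def rating_batch(k: int):
    """Draw k candidate pairs, have the model explicate each, and leave
    a rating slot for a human judge. With a realistic vocabulary the
    pair space is quadratic in vocabulary size, and nothing here says
    what 'rating' is supposed to mean."""
    pairs = random.sample(list(itertools.combinations(CONCEPTS, 2)), k)
    return [{"pair": list(p),
             "explication": llm(f"Explain the analogy between {p[0]} and {p[1]}."),
             "rating": None}            # to be filled in by a gig worker
            for p in pairs]

print(json.dumps(rating_batch(3), indent=2))
```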

It seems to me that what we really want is for GPT-4 to weed out the implausible candidates, explicate only the plausible ones, and then... and then do what? Output the plausible ones for human consideration? Pick the most interesting explications and really investigate them, whatever that means? Again, what are the criteria for judging candidate plausibility and explication interestingness? Perhaps the idea is that GPT-X will figure it out and use the most plausible explications in the next round of recursive self-improvement.

If you buy that, might I offer you some shares in the Brooklyn Bridge, or perhaps Florida swampland? I’ll give you a good deal.

From Double-entry bookkeeping to supply and demand

Now let’s consider a real example of analogy, between double-entry bookkeeping and the economic analysis of supply and demand. Let’s start on the economics side, with the history of economics. Back in October of 2023 Tyler Cowen published a book about the history of economics, which he organized as an attempt to determine who was the greatest economist of all time. On December 10, 2023, Cowen participated in a podcast about that book. In the course of the discussion Cowen mentioned that early economics was not very good:

1:29:20: look at 17th century economics it's pretty piss poor – I mean it's barely anything – so the 17th century Salamancans have some basic pieces – the early mercantilists have some basic pieces – but it's still fairly primitive.

there's something about economics that is more counterintuitive than we moderns realize I think – that's my conclusion – but I still find this a big puzzle

As I observed in a blog post I wrote at the time – I’m quoting and paraphrasing from that post here and there in the following paragraphs – obviously those early thinkers don't have the necessary conceptual equipment. Ideas can be constructed over other ideas, almost layer by layer. There is an inner architecture to thought. We may not understand that architecture at all well, but it is real nonetheless. Cowen was talking about writing a paper about the conceptualization of pricing. The 17th century writing on the issue was not very good. For example, they don't have a conception of supply and demand.

That's when rank 3 thought was emerging, to invoke the scheme David Hays and I wrote about in The Evolution of Cognition. There we argue that the early modern conceptual revolution was precipitated by the assimilation of Arabic arithmetic, leading to algorithms (derived from the name of the Arabic thinker, al-Khowarizmi) and effective calculation. It's downstream of that that we get conceptual foundations capable of supporting economic thought. That's where I'd look for the solution of Cowen's problem.

Back in 2018 I’d written a post entitled, Innovation, stagnation, and the construction of ideas and conceptual systems. In that post I had quoted a paragraph from the Wikipedia entry on mercantilism:

Mercantilism developed at a time of transition for the European economy. Isolated feudal estates were being replaced by centralized nation-states as the focus of power. Technological changes in shipping and the growth of urban centres led to a rapid increase in international trade. Mercantilism focused on how this trade could best aid the states. Another important change was the introduction of double-entry bookkeeping and modern accounting. This accounting made extremely clear the inflow and outflow of trade, contributing to the close scrutiny given to the balance of trade. Of course, the impact of the discovery of America cannot be ignored... New markets and new mines propelled foreign trade to previously inconceivable volumes, resulting in “the great upward movement in prices” and an increase in “the volume of merchant activity itself”.

Now, I did not in fact think about that blog entry when I was considering Cowen’s problem about conceptualizing the mechanism of supply and demand. I didn’t have to. Conceptual development is one of my central intellectual concerns. I think about it a lot. Once I’d read that Wikipedia article and written that post, a connection between double-entry bookkeeping and 17th century Europe had become part of my mental model of the world.

Thus it was quite natural that, the day after I’d made my post about that December 10th podcast, I decided to query ChatGPT about related matters: Supply and Demand, Equilibrium & Calculus, Exponential Growth, Marginal Revolution, and Double-entry bookkeeping. And so I came to formulate a question: What does double-entry bookkeeping have in common with supply and demand? Double-entry bookkeeping dates back to the late 15th century and is a practical discipline. What becomes visible by ‘going meta’ over that practice?

Double-entry bookkeeping is a system for maintaining closure over a set of transactions that grows over time as transactions are added to the list. We’ve got closure and change over time. Supply and demand change over time. Where’s the closure, the dynamic closure? How do supply and demand complement one another in a way similar to the balance of debit and credit accounts in double-entry bookkeeping? How was Adam Smith able to get “on top” of that in the way that previous thinkers could not? How does the famous metaphor of the invisible hand do its work?
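The closure idea can be stated compactly in code. Here is a minimal sketch of a ledger, my illustration rather than anything from the accounting literature, whose single invariant is that every posted transaction balances:

```python
from dataclasses import dataclass, field

@dataclass
class Line:
    account: str
    debit: float = 0.0
    credit: float = 0.0

@dataclass
class Ledger:
    lines: list = field(default_factory=list)

    def post(self, *txn: Line) -> None:
        # The single invariant: a transaction is accepted only if its
        # debits equal its credits, so the books stay "closed" no
        # matter how many transactions accumulate over time.
        if abs(sum(l.debit for l in txn) - sum(l.credit for l in txn)) > 1e-9:
            raise ValueError("unbalanced transaction rejected")
        self.lines.extend(txn)

ledger = Ledger()
# Buying inventory for cash: the two sides of one event balance exactly.
ledger.post(Line("inventory", debit=100.0), Line("cash", credit=100.0))
```

The analogical question then becomes: what plays the role of that invariant in a market? On this reading, equilibrium price is the condition under which quantity supplied and quantity demanded ‘post’ in balance.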

As a last refinement, I asked ChatGPT to explicate an analogy between double-entry bookkeeping and supply and demand – just as I would later ask it to explicate an analogy between, e.g. a Beethoven piano sonata and a helicopter. It did an excellent job. I’ve appended that explication at the end of this essay.

I should say, however, that I do not regard the issue as being settled. There’s a proposal on the table: that thinking about double-entry bookkeeping led to initial conceptualizations of supply and demand. Cowen seems to have found the suggestion plausible, as he linked to my post on his Marginal Revolution blog. But there’s still a bit of work to be done to strengthen and perhaps ultimately confirm that proposal; I suspect that Cowen would agree. I think that confirmation will ultimately require (perhaps considerable) work on the cognitive underpinnings of cultural evolution and the history of ideas. I have little sense of where Cowen stands on those broad issues beyond having a general interest in them.

I have no trouble imagining how the appropriate A.I. technology could help with that process. It needs to have access to a relevant set of machine-readable documents spanning the period from, say, the emergence of double-entry bookkeeping in Europe in the 14th century through the late 18th century, when James Denham-Steuart used the phrase ‘supply and demand’ in his Inquiry into the Principles of Political Economy. One would then, in the manner of Spock on the bridge of the Enterprise, direct the A.I. to trace all paths from one document to another leading from double-entry bookkeeping (early in the corpus) through the document collection to supply and demand (late in the corpus). Have it present you with the key passages, translated into English (or whatever language you wish), along with links back and forth through the collection. You can then read those passages and make further queries as necessary.

I suspect that we might be able to do that with current technology. I think LLMs would be able to do the pattern-matching required to trace those paths through the document collection, though they would likely need to be fitted out with a control structure to keep them on track, the sort of symbol-processing control structure that Gary Marcus keeps talking about. Beyond that, I don’t know whether or not the relevant documents are available in machine-readable form. If not, that’s a doable task. Getting it all integrated into a well-operating system will require work as well, not to mention a considerable leaven of cross-border institutional cooperation.
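To indicate how little of this is mysterious at the level of control structure, here is a minimal sketch of the path-tracing step. The document ids and influence links are invented stand-ins; in practice the links would have to be produced by LLM pattern-matching over the actual texts, which is the hard and unbuilt part.

```python
from collections import deque

# Hypothetical corpus: document id -> documents it plausibly influenced.
# These ids and edges are invented for illustration.
influences = {
    "pacioli_1494":      ["mercantilist_1620"],
    "mercantilist_1620": ["salamanca_1640", "mun_1664"],
    "mun_1664":          ["steuart_1767"],
    "salamanca_1640":    ["steuart_1767"],
}

def trace_paths(start: str, goal: str):
    """Breadth-first enumeration of all influence paths from an early
    bookkeeping text to a late supply-and-demand text. The symbolic
    search scaffolding is the easy part; producing the edges is not."""
    queue, found = deque([[start]]), []
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            found.append(path)
            continue
        for nxt in influences.get(path[-1], []):
            if nxt not in path:   # avoid cycles
                queue.append(path + [nxt])
    return found

for p in trace_paths("pacioli_1494", "steuart_1767"):
    print(" -> ".join(p))
```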

Moreover, if we are going to do it for the history of economics, why not for everything? Such computational facilities would allow us to revolutionize our understanding of the past. And, who knows, such understanding might have practical utility as we advance into the future.

I digress.

Stepping back, as I pointed out in my first example, Jaws and Girard, it is one thing for an A.I. to explain an analogy one gives to it, like Spock asking the Enterprise computer to solve a problem. It is something else entirely for an A.I. to come up with a worthwhile analogy in the first place. Think about the process that led me to gather double-entry bookkeeping and supply and demand into the same context.

On the one hand we have Tyler Cowen, an economist with a particular interest in the history of economics. And then we have me, with a particular interest in how concepts are constructed and how they evolve in culture. I’ve been following Cowen’s ideas for some time now, and we’ve corresponded a bit, so he knows something about my thinking as well. Given my interest in his work, and the fact that I had been looking at his book on great economists, it was natural for me to look at that podcast. But if Cowen had not happened to make some remarks about the poor quality of 17th century economic thinking, in particular, about supply and demand, it might never have occurred to me to think about a possible relationship between that phenomenon and bookkeeping.

That conjunction is thus the result of quasi-serendipitous interaction between two thinkers with very different interests, with very different intellectual histories and a different range of colleagues and contacts. It’s not simply that we know different things and use different methods, but – here’s the crucial point – that we are more or less epistemologically independent. My judgment doesn’t depend on his, and his doesn’t depend on mine.

Such independence is central to the process by which humans produce knowledge. That’s what scientific replication is about, independent judgment of observations. Unfortunately, yes, we have to contend with intellectual fraud. But that’s at best a secondary reason for replicating results. The primary purpose is to bring independent judgment to bear. In areas where looser methods preclude replication, like the humanities and qualitative psychology and sociology, it is still important to have independent investigators studying the same phenomena.

How is that going to work out in the world of A.I.? We don’t know. It’s easy enough to clone a model. But in what sense are those models independent? They are based on exactly the same history. So, while they may be physically separate, functionally they are the same. However, since the operation of LLMs during inference is not completely deterministic, something like epistemological independence may exist between them. But how is that different from presenting a problem to the same LLM in two or more different sessions? And then we have the propensity of LLMs to confabulate.
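To be concrete about where that non-determinism comes from, here is a minimal sketch of temperature sampling, the usual mechanism, with invented token scores. Two ‘sessions’ differ only in their random draws from one and the same distribution, which is variation rather than epistemological independence.

```python
import numpy as np

rng = np.random.default_rng()

def sample_token(logits, temperature=1.0):
    # Temperature sampling, the usual source of run-to-run variation in
    # LLM inference: rescale the scores, softmax, then draw at random.
    z = np.asarray(logits, dtype=float) / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    return int(rng.choice(len(p), p=p))

logits = [2.0, 1.5, 0.3]   # invented scores for three candidate tokens
print([sample_token(logits, temperature=0.8) for _ in range(10)])
# The runs differ, but every draw comes from the same distribution:
# variation, not independence.
```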

Of course, LLMs are only the A.I. du jour. We can’t draw general conclusions about the future of A.I.s from remarks based on this particular architecture. At this point, however, it seems reasonable to bring this particular case to a close and look back over the whole discussion.

Can we evaluate the intelligence of a mind without also explicitly thinking about the structure of the environment in which it operates?

There is a general notion about intelligence that’s been floating around in these discussions for some time:

Intelligence is the ability to use information to attain a goal in an environment.

I believe that idea has been attributed to John McCarthy, but that’s a secondary matter. I want to use it as a way of looking over the previous discussion.

I want to set that up by considering the special case of chess. As I pointed out in this piece, AI, Chess, and Language 1: Two VERY Different Beasts, chess and language present us with very different computational problems. I went on to discuss chess in this piece, AI, Chess, and Language 3.1: Search! What enables us to entertain the idea that chess is a paradigmatic case of cultural evolution?, and language in this one, AI, Chess, and Language 3.2: Search! The case of language. As an environment for the exercise of intelligence, chess is well-behaved and well-understood. Language is neither of those things. Finding useful analogies and explicating them, that’s a problem in the language environment.

Consider chess. The game is defined in terms of simple rules governing the movement of six kinds of pieces (pawn, rook, bishop, knight, queen, king) on an eight-by-eight grid. The universe of all possible chess games (given an appropriate convention for ending play in the case of a draw) takes the form of a tree. To be sure, the chess tree is much too large to be fully laid out in even the largest computer. But we do know quite a bit about searching through that tree.
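For contrast with the analogy case, here is what ‘well-behaved and well-understood’ looks like in code: a textbook minimax sketch over a toy tree, with positions and values invented for illustration. Everything the procedure needs, a successor function and an evaluation function, is well-defined. That is precisely what we lack for analogies.

```python
def minimax(node, depth, maximizing, children, evaluate):
    """Textbook minimax with a depth cutoff: the basic procedure by
    which chess programs search the game tree. The caller supplies
    `children` (expand a position) and `evaluate` (score a leaf)."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        return max(minimax(k, depth - 1, False, children, evaluate) for k in kids)
    return min(minimax(k, depth - 1, True, children, evaluate) for k in kids)

# A toy tree standing in for chess positions; the values are invented.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
leaf_values = {"a1": 3, "a2": -1, "b1": 5}

best = minimax("root", depth=2, maximizing=True,
               children=lambda n: tree.get(n, []),
               evaluate=lambda n: leaf_values.get(n, 0))
print(best)  # 5: the maximizing player steers toward the best reachable line
```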

As far as I can tell, all we know about the space of possible analogies is that it is extremely large – probably larger, even much larger, than the chess tree, but who knows? – and we haven’t got the foggiest idea about how to structure it. Thus there is a great deal of serendipity in the two specific examples I examined, Jaws and Girard on the one hand, bookkeeping and economics on the other. In the latter case I at least have a story to tell about how my recent work put the idea of double-entry bookkeeping in my mind so that I could apply it to a problem Tyler Cowen posed. But the fact that Tyler posed a relevant problem in a place where I’d happen upon it, that was serendipitous. In the case of Jaws and Girard, 1) I’ve known about Girard for decades but had never done any serious work involving his ideas and think about him only occasionally, 2) I decided it was time I watched that film and then, out of the blue 3) WHAM! it hit me, Girard has something to say about the film.

It is a straightforward matter to construct an explicit model of the entire chess environment in a computer and then figure out how to navigate in it. How do we do that for analogy world? To be sure, all the words we might want to use in constructing analogies are there in LLMs, but what procedure can we use to make those analogies explicit so that we can examine them? Then, given candidate analogies, what procedure do we use to find (potentially) useful ones?

The process of evaluating positions in the chess tree is difficult and tricky, but it is tractable, understood well enough that computer chess is considered, for practical purposes, a solved problem. It’s one thing to have an LLM explicate candidate analogies. At first glance, it seems to do that rather well. But whether or not an analogy is actually useful seems to be a function of our overall state of knowledge in some domain. That’s a bit trickier than evaluating chess positions. The chess tree gives us the context of any given board position.

What context is relevant for evaluating my Girardian interpretation of Jaws, and how do you make it computable? I suppose we can think of “film criticism” as a label for the relevant context. Academic film criticism exists in an institutional structure that has loose procedures for conducting such evaluation. Since I’ve posted that argument to several online document repositories, that work now exists at the outer margin of that institutional structure. Now, how do we reconstruct that entire apparatus within a computer system (assuming that’s a reasonable thing to do)?

One can easily imagine a similar story about institutional context for the analogy between double-entry bookkeeping and the phenomenon of supply and demand.

Given these considerations, loose and informal as they are, I am reluctant to give credence to breezy assertions about A.I.s becoming more intelligent than Einstein or von Neumann. It’s easy enough to imagine that such individuals would have high scores on intelligence tests, but are such scores a reasonable proxy for their intellectual accomplishments, which exist in a universe of other thinkers with whom they interact? If it is those accomplishments that we care about, if it is such accomplishments that we are placing in competition with the accomplishments of future A.I.s, it’s not clear to me how to isolate those accomplishments from the general social matrix of intellectual activity. And that goes for the accomplishments of future A.I.s. Where would the Enterprise computer be without Spock to ask it the right questions?

Appendix: ChatGPT explains how supply and demand is like double-entry bookkeeping

In what way is double-entry bookkeeping analogous to supply and demand?

Double-entry bookkeeping and the economic principle of supply and demand are two concepts from different domains—accounting and economics, respectively. While they serve different purposes, it's possible to draw some analogies between them in terms of balancing accounts and market dynamics. Here's an exploration of how double-entry bookkeeping is analogous to supply and demand:

1. Balancing Equations:
In double-entry bookkeeping, the fundamental principle is that every financial transaction involves equal debits and credits, ensuring that the accounting equation (Assets = Liabilities + Equity) remains balanced. This concept of balance is analogous to the equilibrium in supply and demand.
In supply and demand, equilibrium occurs when the quantity supplied equals the quantity demanded in a market. This balance is a key feature of market dynamics, much like the balance in double-entry bookkeeping.
2. Interactions of Forces:
In double-entry bookkeeping, debits and credits are like opposing forces that need to be equal and opposite to maintain balance. Similarly, in supply and demand, there are opposing forces: an increase in supply tends to lower prices, while an increase in demand tends to raise prices. The interaction of these forces results in an equilibrium point.
3. Adjustments and Changes:
In both systems, adjustments are made to maintain balance. In double-entry bookkeeping, adjusting entries are made to ensure that debits equal credits. In supply and demand, changes in market conditions or external factors can lead to adjustments in prices and quantities to maintain equilibrium.
4. Dynamic Nature:
Both double-entry bookkeeping and supply and demand are dynamic systems. They respond to changes over time, and adjustments are made to maintain balance or equilibrium. In bookkeeping, new transactions are recorded, and in economics, market conditions change, affecting supply and demand.
5. Market Clearing:
In economics, the market-clearing price and quantity occur at the point where supply equals demand. This is analogous to the balanced state in double-entry bookkeeping, where debits and credits are equal. In both cases, there is a state of equilibrium that represents a form of balance.

It's important to note that while there are these analogies, double-entry bookkeeping and supply and demand operate in different contexts and serve distinct purposes. Double-entry bookkeeping is a method for recording and managing financial transactions within a business, ensuring accuracy and balance in accounting. On the other hand, supply and demand is a foundational principle in economics, describing the interactions between buyers and sellers in a market.

The analogies drawn here highlight the idea of balance, equilibrium, and the need for adjustments in response to changes. However, it's crucial to recognize the specific contexts and principles that govern each of these concepts in their respective fields.
