Tuesday, December 19, 2023

Stakes in the Sand: Prediction, Cultural Ranks, and the Fourth Arena [+LLM Bonus]

Prediction is a tricky business. Since the motion of the planets is governed by mechanical laws which we understand, predicting their positions is a relatively straightforward matter. It is my understanding, though, that over the long term (several tens of millions of years) their motions are chaotic in the mathematical sense of the word. Why? Because of weak gravitational interactions among the planets.

Similarly, earth’s weather system is governed by mechanical laws which we understand. However, chaotic effects are relatively large, gathering high-resolution data on which to base predictions is difficult, and the computational resources needed are huge. Consequently, predictions for a few days ahead are accurate enough to be useful, but their usefulness tapers off rapidly thereafter.
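To make “chaotic in the mathematical sense” concrete, here is a minimal sketch using the logistic map, a textbook toy system that is no model of planets or weather but exhibits the same essential property: two trajectories that start almost identically soon bear no resemblance to one another.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map (a toy system, not a model of planetary or weather
# dynamics). Two starting points differing by one part in two million
# diverge until the trajectories are completely uncorrelated.

def logistic(x, r=4.0):
    """One step of the logistic map; r = 4.0 is in the chaotic regime."""
    return r * x * (1.0 - x)

a, b = 0.2, 0.2000001  # nearly identical initial conditions
for step in range(1, 31):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a = {a:.6f}  b = {b:.6f}  gap = {abs(a - b):.6f}")
```

The gap grows roughly exponentially (here, doubling per step on average), so no achievable precision in the initial measurement buys more than a modest extension of the forecast horizon.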

Predicting human affairs is extraordinarily tricky. A great deal of effort and technical sophistication is routinely deployed in predicting economic events – fluctuations in securities markets, inflation, government revenues, and so on – and the outcomes of elections. These efforts are not entirely useless.

And then we have the various efforts of the Rationalist community to predict the arrival of AGI and the extermination of humankind by Superintelligence. I regard these as epistemic theater, more akin to divination than real prediction.

The rest of this post is about some of my own efforts at prediction.

Prospero Computing System

Back in 1976 David Hays and I reviewed the computational linguistics literature for a journal that was then called Computers and the Humanities. We began the article by defining computational linguistics and concluded it with a fantasy: a computer program so powerful that it was capable of reading Shakespeare texts in a way that was interesting but not human. We called it Prospero. It was a reasonable fantasy at the time. I figured that we might have such a Prospero system in twenty years. Hays knew better and refused to put dates on such fantasies.

Well, 20 years from 1976 would be 1996. No such system existed at that time, nor was any on the horizon. Whoops! Got that wrong.

The fact is, my attention was elsewhere at the time and I didn’t even notice that history had falsified my youthful prediction. I don’t recall just when I noticed that failure. I don’t even know whether it was before or after the turn of the millennium. Whenever it was, I noted it, but was not distressed. A whole new intellectual landscape had grown up and that’s where my attention was.

These days, of course, LLMs can “read” Shakespeare in some sense. But not in the sense that Hays and I had in mind back in the mid-1970s. We were thinking of a system that, in the first place, could reasonably be said to simulate the operations of the human mind, with explicit arguments based on empirical evidence validating the simulation. That still doesn’t exist and I hesitate to ‘predict’ when it might. In the second place, this system would be transparent, such that, when it had read a Shakespeare play (or any other work of literature), we could look under the hood, as it were, and follow its operations. Regardless of the extent to which LLMs can be said to simulate the operations of the human mind, we can’t observe their inner workings.

With one exception, which I’ll get to at the end of this post, I see little point in making predictions about the future course of work in AI, though I occasionally do so, more or less as a way of participating in current discussions.

Cognitive Evolution

In the summer of 1981 I was part of a team NASA put together to make recommendations about what NASA should do to become current with AI. The team produced a two-volume report, with the second volume consisting of various documents written by individuals on particular issues. I contributed something I called Executive Guide to the Computer Age, which contained the following illustration:

[Diagram: the accelerating pace of cultural evolution, with successive cultural ranks converging on the present.]

That diagram was based on a chapter in my 1978 doctoral dissertation, Cognitive Science and Literary Theory (Department of English, SUNY Buffalo). The point of the diagram is obvious: culture is evolving at an increasing rate which seems to converge on the present, where we find, among many other things, research in artificial intelligence.

That diagram doesn’t predict anything, but it has implications for how we think about the future in the near and mid-term – and forget about the long term. Something BIG is going on.

David Hays and I refined the underlying analysis in a paper we published in 1990, The Evolution of Cognition. There we sharpened the analysis in my dissertation by identifying each rank, as we called them, in that diagram with an advance in informatics: from speech, to writing, to calculation, and finally to computation. We then explained why each informatic advance enabled the construction of new systems of thought.

In our discussion of the current era we observed:

Beyond this, there are researchers who think it inevitable that computers will surpass human intelligence and some who think that, at some time, it will be possible for people to achieve a peculiar kind of immortality by “downloading” their minds to a computer. As far as we can tell such speculation has no ground in either current practice or theory. It is projective fantasy, projection made easy, perhaps inevitable, by the ontological ambiguity of the computer. We still do, and forever will, put souls into things we cannot understand, and project onto them our own hostility and sexuality, and so forth.

A game of chess between a computer program and a human master is just as profoundly silly as a race between a horse-drawn stagecoach and a train. But the silliness is hard to see at the time. At the time it seems necessary to establish a purpose for humankind by asserting that we have capacities that it does not. It is truly difficult to give up the notion that one has to add “because... “ to the assertion “I’m important.” But the evolution of technology will eventually invalidate any claim that follows “because.” Sooner or later we will create a technology capable of doing what, heretofore, only we could.

Note that we wrote this in 1990, before Deep Blue beat Kasparov in a chess match in 1997. With that in mind, read our last sentence again: Sooner or later we will create a technology capable of doing what, heretofore, only we could. We didn’t attach any dates to that statement.

I suppose that, in view of that statement, I could say that nothing that’s happened in the last 30 years surprises me. But that’s not at all true. Developments in deep learning in the second decade of the millennium surprised me and, of course, GPT-3 came as a bit of a shock. But the long-term course of what’s going on seems clear: unless we screw things up, which is certainly possible, we’re moving into a world where we partner with increasingly intelligent machines.

Cosmic Evolution

Where’s that taking us? Consider an article I published in 3 Quarks Daily on June 20, 2022: Welcome to the Fourth Arena – The World is Gifted. Here’s how it begins:

The First Arena is that of inanimate matter, which began when the universe did, fourteen billion years ago. About four billion years ago life emerged, the Second Arena. Of course we’re talking about our local region of the universe. For all we know life may have emerged in other regions as well, perhaps even earlier, perhaps more recently. We don’t know. The Third Arena is that of human culture. We have changed the face of the earth, have touched the moon and the planets, and are reaching for the stars. That happened between two and three million years ago, the exact number hardly matters. But most of the cultural activity is little more than 10,000 years old.

The question I am asking: Is there something beyond culture, something just beginning to emerge? If so, what might it be?

THAT’s what’s going to emerge from a partnership between humans and intelligent machines. A fundamentally new phenomenon, beyond life and beyond culture. Just what that’s going to be like, I cannot say. That’s in the realm of science fiction.

Intelligibility of LLMs

Finally, we have the question: What’s going on inside large language models? That’s a special case of the more general question: What’s going on inside artificial neural nets? I think that by the end of 2024 we will know enough about the internal processes of LLMs that worries about their unintelligibility will be diminishing at a satisfying pace (except perhaps at LessWrong, where the prospect of intelligibility is as likely to increase anxiety as to relieve it). Instead, we will be figuring out how to index them and how to use that index to gain more reliable control over them.
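I can’t say what such an index would look like, but to make the idea less abstract, here is a hypothetical sketch of my own devising (it is not drawn from the reports mentioned below, and the particular model, layer, and library choices are arbitrary): collect per-token hidden-state vectors from a small open model and run nearest-neighbor lookups over them.

```python
# A hypothetical sketch of "indexing" a model's internals: gather
# per-token hidden-state vectors from GPT-2 (via the Hugging Face
# transformers library) and query them by nearest neighbor. Real
# interpretability work is far more involved; this only shows that the
# internal states are there to be indexed and queried. Layer 6 is an
# arbitrary choice.

import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def token_states(text, layer=6):
    """Return the tokens of `text` and their hidden-state vectors at one layer."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    vectors = out.hidden_states[layer][0].numpy()  # shape: (n_tokens, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return tokens, vectors

# Build a tiny "index" of token states from one passage...
tokens, index = token_states("To be, or not to be, that is the question.")

# ...then ask which indexed state the last token of a new passage is nearest to.
_, probe = token_states("To exist, or not to exist.")
query = probe[-1] / np.linalg.norm(probe[-1])
sims = (index / np.linalg.norm(index, axis=1, keepdims=True)) @ query
print("nearest indexed token:", tokens[int(np.argmax(sims))])
```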

Unfortunately I cannot offer a strong argument for this. I’ve been spending a lot of time working with ChatGPT and have so far completed a dozen or so working reports – the top dozen reports on this list. That work in itself does not add up to the argument that such a prediction begs for.

I’ve not stopped working; I have ideas that aren’t in those reports, and I have a collaborator, Visvanathan Ramesh at Goethe University, Frankfurt.

We shall see.
