Thursday, February 9, 2023

What are the 10–20 year prospects for AI?

First, while I do mean AI in general, most of my comments will be based on language technology (large language models, LLMs) because 1) that’s what I’ve been thinking about ever since the launch of ChatGPT, and 2) my understanding of language, of human language and not just of computational approaches to language, is broader and deeper than my understanding of vision and visual media, or even of music.

My hope, and it is a hope, not a prediction, is that AI will become REAL. What do I mean by that? That’s what this post is about.

Venture capitalists have three time horizons

My friend in venture capital, Sean O’Sullivan (who was my boss at MapInfo in the ancient days), tells me there are three time horizons: three months, twelve months, and three years. So, in talking about 10-20 years I’m way out over the tips of my skis. That’s fine.

But let’s begin by looking at those near-term prospects, the ones on which money is ventured – and lost or gained. If we set the clock at November 30, 2022, when ChatGPT was released to the public, then we are over two-thirds of the way into the first time horizon. What has happened?

WHOOSH!!! That’s what.

The public at large is more aware of AI than ever before. In particular, the number of people who have been able to interact directly with an advanced AI (as opposed to Siri, Alexa, and the like) has gone up dramatically, though, at more than 30 million users worldwide, that is still less than 10% of the population of the United States alone. Still, that’s a lot of people.

Note, however, that secondary schools, colleges, and universities are now scrambling to formulate policies and plans for dealing with ChatGPT in the classroom and in research. Last week I answered a survey seeking ideas about listing ChatGPT as an author on research papers. Anything that impacts the educational system is going to have a very wide influence indeed.

Microsoft has increased its investment in OpenAI, the company that produces ChatGPT, is integrating OpenAI’s capabilities into various products, and is on the verge of releasing a souped-up version of Bing, its search engine. Google has gone to DEFCON 1 and is preparing to release a souped-up version of its own search engine. I have no idea what Meta is planning, but surely they’re working on something, perhaps (almost certainly) involving the Metaverse. And, judging from comments by Yann LeCun, their chief AI scientist, their plans go well beyond LLMs.

How is this going to play out over the coming year? More of the same, I suppose. And the same for the three-year horizon. A lot of the investments and entrepreneurial ventures will fail, but some won’t. That’s just how things are. But three years is much too soon for things to settle down.

Why then even bother to speculate about 10 or 20 years down the road? Because I think too many eggs are going into the wrong basket, leaving too little investment for other technologies. By the wrong basket I mean current deep-learning architectures and extensions of them.

Gary Marcus has been banging the drum for symbolic AI (aka GOFAI), not arguing that we abandon deep learning, but that we add symbolic AI into the mix. For what it’s worth, David “IBM’s Watson” Ferrucci believes that as well and has founded a company, Elemental Cognition, that integrates deep learning and explicit logical reasoning. I agree with them. But I fear it will take time for that message to sink in.

My current thinking

Meanwhile, I’ve been thinking things over. In particular, I’ve been thinking about mechanistic interpretability, a set of methods and interests within the larger scope of explainable AI. Mechanistic interpretability is the practice of reverse engineering the internal activities of neural nets, including LLMs. I’ve been pondering the question: What guidance can we find in the careful analysis of the output of LLMs, such as the one driving ChatGPT?
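To give a concrete flavor of what that reverse engineering looks like in practice, here’s a minimal sketch, assuming PyTorch and the Hugging Face transformers implementation of GPT-2 (the choice of model, block, and prompt is mine, purely for illustration). All it does is record the hidden activations of one transformer block; interpretability research begins with recordings like these and asks what circuits produce them:

```python
# A toy probe: record the hidden activations of one transformer block
# while the model processes a prompt. Mechanistic interpretability
# starts from recordings like these and asks what computations produce them.
import torch
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

captured = {}

def save_activation(name):
    def hook(module, inputs, output):
        # For a GPT-2 block, output[0] holds the hidden states:
        # a tensor of shape (batch, sequence_length, d_model).
        captured[name] = output[0].detach()
    return hook

# Attach the hook to block 5 of GPT-2's 12 transformer blocks.
model.h[5].register_forward_hook(save_activation("block_5"))

inputs = tokenizer("Paris is the capital of", return_tensors="pt")
with torch.no_grad():
    model(**inputs)

print(captured["block_5"].shape)  # e.g. torch.Size([1, 5, 768])
```

That, of course, is only the first and easiest step. The hard part, the part the field is working on, is making sense of what those numbers are doing.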

The final paragraph from my post, ChatGPT: Tantalizing afterthoughts in search of story trajectories [induction heads] (February 2), expresses my optimism about mechanistic interpretability:

Does anyone want to wager on when the opaqueness of advanced LLMs gives way to translucency? What about transparency? Those strike me as being more sensible wagers than betting on the emergence of AGI. The emergence of AGI depends on luck and magic. Figuring out how deep neural nets work requires only insight, hard work, and time.

This is from the end of my working paper, ChatGPT intimates a tantalizing future (third revision, which I posted a couple of days ago):

Large language models already seem to be using symbolic mechanisms even though they were not designed to do so. Those symbolic mechanisms have simply emerged, albeit implicitly. What will it take to make them explicit?

We do not want to hand-code symbolic representations, as researchers did in the GOFAI era. The universe of discourse is too large and complex for that. Rather, we need some way to take a language model and bootstrap explicit symbolic representations into it. It is one thing to “bolt” a symbolic system onto a neural net. How do we get the symbolic system to emerge from the neural net, as it does in humans?

During the first year and a half of life, children acquire a rich stock of ‘knowledge’ about the physical world and about interacting with others. They bring that knowledge with them as their interaction with others broadens to include language. They learn language by conversing. In this way word forms become indexes into their mental model of the world.

Can we figure out a similar approach for artificial systems? They can converse with us, and with each other. How do we make it work?
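To make that “indexing” image concrete, here’s a deliberately crude toy in Python. Every name and structure in it is invented for illustration only; it’s a cartoon, not a proposal:

```python
# A deliberately crude toy, not a proposal: word forms as indexes into
# a separately acquired "world model". All names here are invented.

# The "world model": what the system knows prior to (and apart from) language.
world_model = {
    "ball":  {"kind": "object", "properties": ["round", "bounces"]},
    "mama":  {"kind": "agent",  "properties": ["feeds", "comforts"]},
    "throw": {"kind": "action", "roles": ["agent", "object"]},
}

# The lexicon: surface word forms that index entries in the world model.
# Note that distinct forms ("throw", "toss") can index the same entry.
lexicon = {"ball": "ball", "mommy": "mama", "throw": "throw", "toss": "throw"}

def ground(word_form):
    """Return the world-model entry a word form indexes, if any."""
    key = lexicon.get(word_form)
    return world_model.get(key) if key is not None else None

print(ground("toss"))   # {'kind': 'action', 'roles': ['agent', 'object']}
print(ground("xyzzy"))  # None -- an ungrounded form means nothing yet
```

The real question, of course, is how a system could build both the world model and the lexicon through interaction, as children do, rather than having them handed over ready-made, as this toy does.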

Let’s think ahead to the next generation

I’m guessing that a decade or so should be enough time for deep learning to have run its course. Note that by this I do not mean that deep learning will disappear. I assume that it will become a permanent technology in the standard repertoire, but the novelty will have worn off and its limitations will be understood and widely accepted. A lot of venture capital will have been burned through (that’s the nature of the business). But we will also have seen a lot of interesting technology, some of it dead-ended, but some still alive and kicking.

When GPT-3 first appeared in the middle of 2020, I posted my quick take on the future of AI, which I reiterated at the end of December: Thoughts on the implications of GPT-3, two years ago and NOW [here be dragons, we're swimming, flying and talking with them]. I have no reason to revise that so soon.

Beyond that, it’s tricky. There is a reason venture capitalists don’t think beyond three years out; so many event streams are interacting that three years is the limit of investment-worthy prediction. I don’t have a crystal ball. But then, I’m not so much concerned with the emergence of specific product streams. At the moment I’m thinking a bit more abstractly.

As you may know, back in the 1990s David Hays and I wrote a number of articles, both together and individually (and Hays wrote a book), on a theory of cultural ranks (here’s a guide). The general idea is that speech, writing, calculation, and computing have emerged over the last, say, 50,000 years, at inter-rank intervals decreasing by roughly an order of magnitude, and that each of those informatic technologies supports a more sophisticated family of architectures for thought, expression, and action. In private conversation we talked of a fifth cultural rank, which we suspected was emerging around us. But we never published anything about it because it was too difficult to conceptualize. Conceptualizing the fourth rank, computing, was all we could manage. A fifth rank? What’s it about? What’s the new informatic technology?

THAT’s what’s on my mind. When scaling up deep-learning systems fails to take us all the way to AGI, or even noticeably closer, when superintelligence seems as chimerical as ever, will the “thought leaders” in this space sober up and think more seriously and deeply? Will we make more progress toward new systems of thought? That’s what I was thinking back in August when I posted, Which comes first, AGI or new systems of thought? [Further thoughts on the Pinker/Aaronson debates].

I have two reasons for not taking AGI and superintelligence seriously. On the one hand, neither is very well-defined. As Steven Pinker remarked in one of his debates with Scott Aaronson, superintelligence is more like a superpower than a well-articulated technical concept. And so it is with AGI. My other reason, though, is simply that I have a different way of thinking about intellectual and technological advance. I have this theory of cultural ranks. To be sure, I can’t articulate the underpinnings of the next rank, the one we’re struggling toward at the moment, but I can point to the past, noting that in time we’ve always managed to develop new systems of thought that encompassed and eclipsed the old ones. Why should that process stop? Why, in particular, should human thought stagnate while the machines we’ve created surpass us?

For that is the unstated presupposition of those who keep predicting and hoping for the emergence of AGI. They seem to believe that human thought has come to a standstill and the best we can do is wait for the machines to evolve and hope they don’t destroy us. Though some are hoping to stave off that fate by either slowing the emergence of AGI or figuring out how to “align” it with human values. There doesn’t seem to be the faintest hint of an inkling of an idea that human thought, too, is still advancing.

What I’m thinking is that the effort to advance beyond the technology we’ll have in a decade or so will take us up to the next level, one where we’re working on fundamentally new physical platforms for computing, where neuromorphic technology begins to emerge from the laboratory into practical deployment. Just as dogs and humans coevolved over tens of thousands of years early in human history, so we will coevolve with these new neuromorphic devices into a world where thoughts of AGI and superintelligence seem as quaint as angels dancing on the head of a pin.
