Tuesday, August 23, 2022

Which comes first, AGI or new systems of thought? [Further thoughts on the Pinker/Aaronson debates]

Long-time readers of New Savanna know that David Hays and I have a model of cultural evolution built on the idea of cognitive rank, systems of thought embedded in large-scale cognitive architecture. Within that context I have argued that we are currently undergoing a large-scale transformation comparable to those that gave us the Industrial Revolution (Rank 3 in our model) and, more recently, the conceptual revolutions of the first half of the 20th century (Rank 4). Thus I have suggested that the REAL singularity is not the fabled tech singularity, but the consolidation of new conceptual architectures:

Redefining the Coming Singularity – It’s not what you think, Version 2, Working Paper, November 2015, https://www.academia.edu/8847096/Redefining_the_Coming_Singularity_It_s_not_what_you_think

I had occasion to introduce this idea into the recent AI debate between Steven Pinker and Scott Aaronson. Pinker was asking for specific mechanisms underpinning superintelligence while Aaronson was offering what Pinker called “superpowers.” Thus Pinker remarked:

If you’ll forgive me one more analogy, I think “superintelligence” is like “superpower.” Anyone can define “superpower” as “flight, superhuman strength, X-ray vision, heat vision, cold breath, super-speed, enhanced hearing, and nigh-invulnerability.” Anyone could imagine it, and recognize it when he or she sees it. But that does not mean that there exists a highly advanced physiology called “superpower” that is possessed by refugees from Krypton! It does not mean that anabolic steroids, because they increase speed and strength, can be “scaled” to yield superpowers. And a skeptic who makes these points is not quibbling over the meaning of the word superpower, nor would he or she balk at applying the word upon meeting a real-life Superman. Their point is that we almost certainly will never, in fact, meet a real-life Superman. That’s because he’s defined by human imagination, not by an understanding of how things work. We will, of course, encounter machines that are faster than humans, and that see X-rays, that fly, and so on, each exploiting the relevant technology, but “superpower” would be an utterly useless way of understanding them.

I’ve added my comment below the asterisks.

* * * * *

I’m sympathetic with Pinker and I think I know where he’s coming from. Thus he’s done a lot of work on verb forms, regular and irregular, that involves the details of (computational) mechanisms. I like mechanisms as well, though I’ve worried about different ones than he has. For example, I’m interested in (mostly) literary texts and movies that have the form: A, B, C...X...C’, B’, A’. Some examples: Gojira (1954), the original 1933 King Kong, Pulp Fiction, Obama’s eulogy for Clementa Pinckney, Joseph Conrad’s Heart of Darkness, Shakespeare’s Hamlet, and Osamu Tezuka’s Metropolis.

What kind of computational process produces such texts and what kind of computational process is involved in comprehending them? Whatever that process is, it’s running in the human brain, whose mechanisms are obscure. There was a time when I tried writing something like pseudo-code to generate one or two such texts, but that never got very far. So these days I’m satisfied identifying and describing such texts. It’s not rocket science, but it’s not trivial either. It involves a bit of luck and a lot of detail work.
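For what it’s worth, here is a minimal sketch, in Python, of the symmetric envelope itself (the A, B, C...X...C’, B’, A’ skeleton), offered purely as an illustration of the target structure and not as the generative pseudo-code mentioned above; the function name ring_outline is my own invention.

```python
# A minimal sketch of the ring-composition envelope A, B, C ... X ... C', B', A'.
# This only lays out the symmetric skeleton of such a text; it says nothing about
# the (unknown) cognitive process that actually produces or comprehends one.

def ring_outline(segments, center):
    """Return a list of section labels arranged symmetrically around a center.

    segments: labels for the outbound half, e.g. ["A", "B", "C"]
    center:   label for the pivot episode, e.g. "X"
    """
    outbound = list(segments)
    inbound = [s + "'" for s in reversed(segments)]  # mirrored sections C', B', A'
    return outbound + [center] + inbound

if __name__ == "__main__":
    print(ring_outline(["A", "B", "C"], "X"))
    # ['A', 'B', 'C', 'X', "C'", "B'", "A'"]
```

The skeleton is easy; the hard part, as the paragraph above says, is the process that fills it in and the process that recognizes it in a film or a eulogy.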

So, like Steve, I have trouble with mechanism-free definitions of AGI and superintelligence. I liked his earlier contrast between defining intelligence as mechanism vs. magic, and I like his current contrast between “intelligence as an undefined superpower rather than a[s] mechanisms with a makeup that determines what it can and can’t do.”

In contrast, Gary Marcus has been arguing for the importance of symbolic systems in AI, in addition to neural networks, often with Yann LeCun as his target. I’ve followed this debate fairly carefully, and have even weighed in here and there. This debate is about mechanisms: mechanisms for computers and in the mind, for the near term and the far term.

Whatever your current debate with Steve is about, it’s not about this kind of mechanism vs. that kind. It has a different flavor. It’s more about definitions, even, if you will, metaphysics. But, for the sake of argument, I’ll grant that, sure, the concept of intellectual superpowers is coherent (even if we have little idea of how they’d work beyond MORE COMPUTE!).

With that in mind, you say:

Not only does the concept of “superpowers” seem coherent to me, but from the perspective of someone a few centuries ago, we arguably have superpowers—the ability to summon any of several billion people onto a handheld video screen at a moment’s notice, etc. etc. You’d probably reply that AI should be thought of the same way: just more tools that will enhance our capabilities, like airplanes or smartphones, not some terrifying science-fiction fantasy.

I like the way you’ve introduced cultural evolution into the conversation, as that’s something I’ve thought about a great deal. Mark Twain wrote a very amusing book, A Connecticut Yankee in King Arthur’s Court. From the Wikipedia description:

In the book, a Yankee engineer from Connecticut named Hank Morgan receives a severe blow to the head and is somehow transported in time and space to England during the reign of King Arthur. After some initial confusion and his capture by one of Arthur's knights, Hank realizes that he is actually in the past, and he uses his knowledge to make people believe that he is a powerful magician.

Is it possible that in the future there will be human beings as far beyond us as that Yankee engineer was beyond King Arthur and Merlin? Provided we avoid disasters like nuking ourselves back to the Stone Age, catastrophic climate change exacerbated by pandemics, and getting paperclipped by an absentminded Superintelligence, it seems to me almost inevitable that that will happen. Of course science fiction is filled with such people but, alas, has not a hint of the theories that give them such powers. But I’m not talking about science fiction futures. I’m talking about the real future. Over the long haul we have produced ever more powerful accounts of how the world works and ever more sophisticated technologies through which we have transformed the world. I see no reason why that should come to a stop.

So, at the moment various researchers are investigating the parameters of scale in LLMs. What are the effects of differing numbers of tokens in the training corpus and numbers of parameters in the model? Others are poking around inside the models to see what’s going on in various layers. Still others are comparing the response characteristics of individual units in artificial neural nets with the response characteristics of neurons in biological visual systems. And so on and so forth. We’re developing a lot of empirical knowledge about how these systems work, and, here and there, models of it as well.
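For concreteness, here is a hedged sketch of the kind of scaling relationship that work on tokens and parameters produces. The functional form and the constants follow the fit reported by Hoffmann et al. (2022) for the Chinchilla models; they are quoted purely as an illustration, and the helper name predicted_loss is my own.

```python
# Illustrative sketch of an LLM scaling law: predicted pretraining loss as a
# function of parameter count N and training tokens D.
# Functional form L(N, D) = E + A / N**alpha + B / D**beta, with constants as
# reported in the Chinchilla fit (Hoffmann et al., 2022), used here only as an
# example of the empirical relationships being mapped out.

def predicted_loss(n_params: float, n_tokens: float,
                   E: float = 1.69, A: float = 406.4, B: float = 410.7,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    """Estimate pretraining loss from model size and training-corpus size."""
    return E + A / n_params**alpha + B / n_tokens**beta

if __name__ == "__main__":
    # e.g. a model with 70 billion parameters trained on 1.4 trillion tokens
    print(round(predicted_loss(70e9, 1.4e12), 3))
```

Curves like this are empirical summaries, not explanations; they tell us how loss moves with scale, not what the resulting systems can or can’t do.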

I have no trouble at all imagining a future in which we will know a lot more about how these artificial models work internally and how natural brains work as well. Perhaps we’ll even be able to create new AI systems in the way we create new automobiles. We specify the desired performance characteristics and then use our accumulated engineering knowledge and scientific theory to craft a system that meets those specifications.

It seems to me that’s at least as likely as an AI system spontaneously tipping into the FOOM regime and then paperclipping us. Can I predict when this will happen? No. But then I regard various attempts to predict the arrival of AGI (whether through simple Moore’s Law type extrapolation or the more heroic efforts of Open Philanthropy’s biological anchors) as mostly epistemic theater.
