
Thursday, December 29, 2022

Thoughts on the implications of GPT-3, two years ago and NOW [here be dragons, we're swimming, flying and talking with them]

When GPT-3 first came out, I registered my first reactions in a comment at Marginal Revolution, which appears immediately below the picture of Gojochan and Sparkychan. I'm currently completing a working paper about my interaction with ChatGPT. That paper will end with an appendix in which I repeat my remarks from two years ago and add some new ones. Those new remarks follow the Marginal Revolution comment below.

* * * * *


A bit revised from a comment I made at Marginal Revolution:

Yes, GPT-3 [may] be a game changer. But to get there from here we need to rethink a lot of things. And where that's going (that is, where I think it best should go) is more than I can do in a comment.

Right now, we're doing it wrong, headed in the wrong direction. AGI, a really good one, isn't going to be what we're imagining it to be, e.g. the Star Trek computer.

Think AI as platform, not feature (Andreessen). Obvious implication: the basic computer will be an AI-as-platform. Every human will get their own as a very young child. They'll grow with it; it'll grow with them. The child will care for it as with a pet. Hence we have ethical obligations to them. As the child grows, so does the pet – the pet will likely have to migrate to other physical platforms from time to time.

Machine learning was the key breakthrough. Rodney Brooks' Genghis, with its subsumption architecture, was a key development as well, for it was directed at robots moving about in the world. FWIW Brooks has teamed up with Gary Marcus and they think we need to add some old school symbolic computing into the mix. I think they're right.

Machines, however, have a hard time learning the natural world as humans do. We're born primed to deal with that world with millions of years of evolutionary history behind us. Machines, alas, are a blank slate.

The native environment for computers is, of course, the computational environment. That's where to apply machine learning. Note that writing code is one of GPT-3's skills.

So, the AGI of the future, let's call it GPT-42, will be looking in two directions, toward the world of computers and toward the human world. It will be learning in both, but in different styles and to different ends. In its interaction with other artificial computational entities GPT-42 is in its native milieu. In its interaction with us, well, we'll necessarily be in the driver's seat.

Where are we with respect to the hockey stick growth curve? For the last three-quarters of a century, since the end of WWII, we've been moving horizontally, along a plateau, developing tech. GPT-3 is one signal that we've reached the toe of the next curve. But to move up the curve, as I've said, we have to rethink the whole shebang.

We're IN the Singularity. Here be dragons.

[Superintelligent computers emerging out of the FOOM is bullshit.]

* * * * *

ADDENDUM: A friend of mine, David Porush, has reminded me that Neal Stephenson has written of such a tutor in The Diamond Age: Or, A Young Lady's Illustrated Primer (1995). I then remembered that I have played the role of such a tutor in real life; see The Freedoniad: A Tale of Epic Adventure in which Two BFFs Travel the Universe and End up in Dunkirk, New York.

* * * * *

To the future and beyond!

I stand by those remarks from two years ago, but I want to comment on four things: 1) AI alignment, 2) the need for symbolic computing, 3) the need for new kinds of hardware, and 4) a future world in which humans and AIs interact freely.

Considerable effort has gone into tuning ChatGPT so that it won’t say things that are offensive (e.g. racial slurs) or give out dangerous information (e.g. how to hotwire cars). These efforts have not been entirely successful. This is one aspect of what is now being called “AI alignment.” In the extreme, the field of AI alignment is oriented toward the possibility – which some see as a certainty – that in the future (somewhere between, say, 30 and 130 years) rogue AIs will wage a successful battle against humankind.[1] I don’t think that fear is very credible, but, as the rollout of ChatGPT makes abundantly clear, AIs built on deep learning are unpredictable and even, in some measure, uncontrollable.

I think the problem is inherent in deep learning technology. Its job is to fit a model to a corpus – in the case of ChatGPT, an extremely large corpus of writing drawn from much of the internet. That corpus, in turn, is ultimately about the world. The world is vast, irregular, and messy. That messiness is amplified by the messiness inherent in the human brain/mind, which did, after all, evolve to fit that world. Any AI engine capable of capturing a significant portion of the order inherent in our collective writing about the world has no choice but to encounter and incorporate some of the disorder and clutter into its model as well.
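To make that point concrete, here is a toy sketch of what “fitting a model to a corpus” amounts to. It is my own illustration, nothing like the actual GPT-3 pipeline: a tiny bigram model counts which word follows which in a handful of sentences and then generates text by sampling from those counts. GPT-3 uses transformers trained over billions of tokens rather than bigram counts over a few lines, but the basic logic of absorbing the statistics of the training text, order and noise alike, is the same.

import random
from collections import defaultdict, Counter

# A toy "corpus." In a real system this would be a large slice of the web,
# with all of its order and all of its mess.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "teh cat sat on teh mat",   # typos and noise get learned too
]

# Fit the model: count which word follows which.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def sample_sentence():
    """Generate a sentence by sampling from the learned bigram counts."""
    word, out = "<s>", []
    while True:
        nxt_counts = bigrams[word]
        word = random.choices(list(nxt_counts), weights=list(nxt_counts.values()))[0]
        if word == "</s>" or len(out) > 20:
            break
        out.append(word)
    return " ".join(out)

for _ in range(3):
    print(sample_sentence())

Run it a few times and the typo in the fourth sentence surfaces in the output right alongside the grammatical patterns. The model has no way of telling order from clutter; it simply fits what it is given.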

I regard such foundation models[2], as they have come to be called, as wilderness preserves, digital wilderness. They contain what digital humanist Ted Underwood calls the latent space of culture.[3] He says:

The immediate value of these models is often not to mimic individual language understanding, but to represent specific cultural practices (like styles or expository templates) so they can be studied and creatively remixed. This may be disappointing for disciplines that aspire to model general intelligence. But for historians and artists, cultural specificity is not disappointing. Intelligence only starts to interest us after it mixes with time to become a biased, limited pattern of collective life. Models of culture are exactly what we need.

In his penultimate paragraph Underwood notes:

I have suggested that approaching neural models as models of culture rather than intelligence or individual language use gives us even more reason to worry. But it also gives us more reason to hope. It is not entirely clear what we plan to gain by modeling intelligence, since we already have more than seven billion intelligences on the planet. By contrast, it’s easy to see how exploring spaces of possibility implied by the human past could support a more reflective and more adventurous approach to our future. I can imagine a world where generative models of culture are used grotesquely or locked down as IP for Netflix. But I can also imagine a world where fan communities use them to remix plot tropes and gender norms, making “mass culture” a more self-conscious, various, and participatory phenomenon than the twentieth century usually allowed it to become.

These digital wilderness regions thus represent opportunities for discovery and elaboration. Alignment is simply one aspect of that process.

And by alignment I mean more than aligning the AI’s values with human values; I mean aligning its conceptual structure as well. That’s where “old school” symbolic computing enters the picture, especially language. Language – not the mere word forms available in digital corpora, but word forms plus semantics and syntactic affordances – is one of the chief ‘tools’ through which young humans are acculturated and through which human communities maintain their beliefs and practices. The full powers of language, as treated by classical symbolic systems, will be essential for “domesticating” the digital wilderness and developing it for human use.
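By way of illustration, and only illustration, here is a toy rendering of what I mean by word forms plus semantics and syntactic affordances. The concepts and relations below are my own invention, far cruder than the semantic networks and frames of classical symbolic AI, but they show the kind of structure that a corpus of bare word forms does not carry on its own.

from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    relations: dict = field(default_factory=dict)   # e.g. {"isa": "ANIMAL"}

@dataclass
class LexicalEntry:
    form: str         # the word form, which is all a bare corpus provides
    concept: Concept  # what the form means
    syntax: dict      # how the form can be deployed in a sentence

dog = Concept("DOG", {"isa": "ANIMAL", "typical-role": "PET"})
chase = Concept("CHASE", {"isa": "ACTION", "agent": "ANIMATE", "patient": "ANIMATE"})

lexicon = {
    "dog": LexicalEntry("dog", dog, {"category": "noun", "number": "singular"}),
    "chase": LexicalEntry("chase", chase, {"category": "verb", "frame": ["subject", "object"]}),
}

# A statistical model gets the forms from the corpus; the semantics and the
# syntactic affordances are what a symbolic layer would have to supply.
print(lexicon["chase"].concept.relations)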

However, this presents technical problems, problems I cannot go into here in any detail.[4] The basic issue is that symbolic computing involves one strategy for implementing cogitation, call it that, in a physical system, while the neural computing underlying deep learning requires a different physical implementation. The two approaches are incompatible. While one can “bolt” a symbolic system onto a neural computing system, that strikes me as no more than an interim solution. It will get us started; indeed, the work has already begun.[5]
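To give a rough sense of what such a bolted-on arrangement looks like, here is a schematic sketch, mine alone and no one’s actual architecture: a stand-in for a neural model proposes fluent candidate answers, and a hand-written symbolic layer checks them against an explicit knowledge base before any of them gets through.

# Schematic sketch of "bolting" a symbolic layer onto a neural one.
# neural_propose is a stand-in for a real language model, not a real API;
# the knowledge base and the checker are hand-written symbolic machinery.

def neural_propose(prompt):
    """Stand-in for a neural model returning fluent candidate answers."""
    return [
        "Lyon is the capital of France.",
        "Paris is the capital of France.",
    ]

FACTS = {("capital-of", "France"): "Paris"}   # a tiny symbolic knowledge base

def symbolic_check(candidate):
    """Accept a candidate only if it is consistent with the knowledge base."""
    return candidate.startswith(FACTS[("capital-of", "France")])

def answer(prompt):
    for candidate in neural_propose(prompt):
        if symbolic_check(candidate):
            return candidate
    return "No candidate survived the symbolic check."

print(answer("What is the capital of France?"))

The division of labor is the point: the neural side supplies fluent candidates, the symbolic side supplies hard constraints. The neuro-symbolic work cited in note [5] is far more sophisticated than this, but it faces the same seam between the two kinds of machinery.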

What we want, though, is for the symbolic system to arise from the neural system, organically, as it does in humans.[6] This may well call for fundamentally new physical platforms for computing, platforms based on “neuromorphic” components that are “grown,” as Geoffrey Hinton has recently remarked.[7] That technology will give us a whole new world, one where humans, AIs and robots interact freely with one another, though the AIs and robots will have communities of their own as well. We know that dogs co-evolved with humans over tens of thousands of years. These miraculous new devices will co-evolve with us over the coming decades and centuries.

Let us end with Miranda’s words from Shakespeare’s The Tempest:

“Oh wonder!
How many goodly creatures are there here!
How beauteous mankind is! Oh brave new world,
That has such [devices] in’t.”

* * * * *

[1] The virtual center of this belief is a website called LessWrong, which has extensive discussion of this issue going back well over a decade. Here it is, https://www.lesswrong.com/.

[2] Foundation models, Wikipedia, https://en.wikipedia.org/wiki/Foundation_models.

[3] Ted Underwood, Mapping the latent spaces of culture, The Stone and the Shell, Oct. 21, 2021, https://tedunderwood.com/2021/10/21/latent-spaces-of-culture/.

[4] I discuss this issue in this blog post, Physical constraints on computing, process and memory, Part 1 [LeCun], New Savanna, July 24, 2022, https://new-savanna.blogspot.com/2022/07/physical-constraints-on-computing.html.

[5] Consult the Wikipedia entry, Neuro-symbolic AI, for some pointers, https://en.wikipedia.org/wiki/Neuro-symbolic_AI.

[6] I discuss this in a recent working paper, Relational Nets Over Attractors, A Primer: Part 1, Design for a Mind, Version 2, Working Paper, July 13, 2022, pp. 76, https://www.academia.edu/81911617/Relational_Nets_Over_Attractors_A_Primer_Part_1_Design_for_a_Mind.

[7] Tiernan Ray, We will see a completely new type of computer, says AI pioneer Geoff Hinton, ZDNET, December 1, 2022, https://www.zdnet.com/article/we-will-see-a-completely-new-type-of-computer-says-ai-pioneer-geoff-hinton-mortal-computation/#ftag=COS-05-10aaa0j.
