Artificial intelligence was ambitious from its beginnings in the mid-1950s; this or that practitioner would confidently predict that before long computers would be able to perform any mental activity that humans could. As a practical matter, however, AI systems tended to be narrowly focused, and the field's grander ambitions seemed ever in retreat. Then, at long last, one of those grand ambitions was realized: in 1997 IBM's Deep Blue beat world champion Garry Kasparov at chess.
But that was only chess. Humans remained ahead of computers in every other sphere, and AI kept cranking out narrowly focused systems. Why? Because cognitive and perceptual competence turned out to require a great density of detailed procedures, and the only way to accumulate that density of detail was to focus on a narrow domain.
Meanwhile, in 1993 Vernor Vinge delivered a paper, "The Coming Technological Singularity," at a NASA symposium and then published it in the Whole Earth Review:
Progress in hardware has followed an amazingly steady curve in the last few decades. Based on this trend, I believe that the creation of greater-than-human intelligence will occur during the next thirty years. (Charles Platt has pointed out that AI enthusiasts have been making claims like this for thirty years. Just so I'm not guilty of a relative-time ambiguity, let me be more specific: I'll be surprised if this event occurs before 2005 or after 2030.)
What are the consequences of this event? When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities – on a still-shorter time scale. The best analogy I see is to the evolutionary past: Animals can adapt to problems and make inventions, but often no faster than natural selection can do its work – the world acts as its own simulator in the case of natural selection. We humans have the ability to internalize the world and conduct what-if’s in our heads; we can solve many problems thousands of times faster than natural selection could. Now, by creating the means to execute those simulations at much higher speeds, we are entering a regime as radically different from our human past as we humans are from the lower animals.
This change will be a throwing-away of all the human rules, perhaps in the blink of an eye – an exponential runaway beyond any hope of control. Developments that were thought might only happen in “a million years” (if ever) will likely happen in the next century.
That got people’s attention, at least in some tech-focused circles, and provided a new focal point for thinking about artificial intelligence and the future. It’s one thing to predict and hope for intelligent machines that will do this, that, and the other. But rewriting the nature of history, top to bottom, that’s something else again.
In the mid-2000s, Ben Goertzel and others felt the need to rebrand AI in a way better suited to the grand possibilities that lay ahead. Goertzel noted:
In 2002 or so, Cassio Pennachin and I were editing a book on approaches to powerful AI, with broad capabilities at the human level and beyond, and we were struggling for a title. The provisional title was “Real AI” but I knew that was too controversial. So I emailed a bunch of friends asking for better suggestions. Shane Legg, an AI researcher who had worked for me previously, came up with Artificial General Intelligence. I didn’t love it tremendously but I fairly soon came to the conclusion it was better than any of the alternative suggestions. So Cassio and I used the term for the book title (the book “Artificial General Intelligence” was eventually published by Springer in 2005), and I began using it more broadly.
Goertzel realized the term had its limitations. “Intelligence” is a vague idea and, whatever it means, “no real-world intelligence will ever be totally general.” Still:
It seems to be catching on reasonably well, both in the scientific community and in the futurist media. Long live AGI!
My point, then, is that the term did not refer to a specific technology or set of mechanisms. It was an aspirational term, not a technical one.
And so it remains. As Jack Clark tweeted a few weeks ago:
Discussions about AGI tend to be pointless as no one has a precise definition of AGI, and most people have radically different definitions. In many ways, AGI feels more like a shibboleth used to understand if someone is in- or out-group wrt some issues.
— Jack Clark (@jackclarkSF) August 6, 2022
For that matter, “the Singularity” is a shibboleth as well. The two of them tend to travel together.
The impulse behind AGI is to rescue AI from its narrow concerns and focus our attention on grand possibilities for the future.