Scott Aaronson has hosted Steven Pinker for a discussion at Shtetl-Optimized.
Pinker on AGI:
Regarding the second, engineering question of whether scaling up deep-learning models will “get us to Artificial General Intelligence”: I think the question is probably ill-conceived, because I think the concept of “general intelligence” is meaningless. (I’m not referring to the psychometric variable g, also called “general intelligence,” namely the principal component of correlated variation across IQ subtests. This is a variable that aggregates many contributors to the brain’s efficiency such as cortical thickness and neural transmission speed, but it is not a mechanism (just as “horsepower” is a meaningful variable, but it doesn’t explain how cars move).) I find most characterizations of AGI to be either circular (such as “smarter than humans in every way,” begging the question of what “smarter” means) or mystical—a kind of omniscient, omnipotent, and clairvoyant power to solve any problem. No logician has ever outlined a normative model of what general intelligence would consist of, and even Turing swapped it out for the problem of fooling an observer, which spawned 70 years of unhelpful reminders of how easy it is to fool an observer.
If we do try to define “intelligence” in terms of mechanism rather than magic, it seems to me it would be something like “the ability to use information to attain a goal in an environment.” (“Use information” is shorthand for performing computations that embody laws that govern the world, namely logic, cause and effect, and statistical regularities. “Attain a goal” is shorthand for optimizing the attainment of multiple goals, since different goals trade off.) Specifying the goal is critical to any definition of intelligence: a given strategy in basketball will be intelligent if you’re trying to win a game and stupid if you’re trying to throw it. So is the environment: a given strategy can be smart under NBA rules and stupid under college rules.
Since a goal itself is neither intelligent nor unintelligent (Hume and all that), but must be exogenously built into a system, and since no physical system has clairvoyance for all the laws of the world it inhabits down to the last butterfly wing-flap, this implies that there are as many intelligences as there are goals and environments. There will be no omnipotent superintelligence or wonder algorithm (or singularity or AGI or existential threat or foom), just better and better gadgets.
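Pinker’s mechanistic definition, that intelligence is the use of information to attain a goal in an environment, and his observation that the same strategy can be intelligent under one goal and stupid under another, can be made concrete in a few lines of code. The sketch below is my own illustration, not anything from Pinker or Aaronson; the policy, goals, and numbers are invented for the example, which simply scores one fixed basketball-style policy against two opposite goals over the same set of situations.

```python
# A minimal sketch of goal-relative intelligence (my example, not Pinker's):
# the same fixed policy is rated "intelligent" or "stupid" depending on the
# exogenously supplied goal it is evaluated against.

def aggressive_policy(score_margin):
    """A fixed strategy: shoot when behind or tied, otherwise run the clock."""
    return "shoot" if score_margin <= 0 else "run_clock"

def evaluate(policy, goal, situations):
    """Score a policy by the fraction of situations in which its action serves the goal."""
    hits = sum(1 for s in situations if goal(policy(s), s))
    return hits / len(situations)

# Two opposite goals defined over the same environment (late-game score margins).
win_goal  = lambda action, margin: (action == "shoot") == (margin <= 0)
lose_goal = lambda action, margin: (action == "shoot") != (margin <= 0)

situations = [-5, -1, 0, 2, 7]
print(evaluate(aggressive_policy, win_goal, situations))   # 1.0 -- "intelligent"
print(evaluate(aggressive_policy, lose_goal, situations))  # 0.0 -- "stupid"
```

The policy never changes; only the goal it is measured against does, and that alone flips its score from perfect to useless, which is Pinker’s point about there being as many intelligences as there are goals and environments.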
Aaronson responds:
Basically, one side says that, while GPT-3 is of course mind-bogglingly impressive, and while it refuted confident predictions that no such thing would work, in the end it’s just a text-prediction engine that will run with any absurd premise it’s given, and it fails to model the world the way humans do. The other side says that, while GPT-3 is of course just a text-prediction engine that will run with any absurd premise it’s given, and while it fails to model the world the way humans do, in the end it’s mind-bogglingly impressive, and it refuted confident predictions that no such thing would work.
Though I’m with Pinker on the definition of AGI, I nonetheless take the second of the two positions Aaronson sets forth, which, as I read it, is Aaronson’s own, while the first is Pinker’s. That’s why I wrote GPT-3: Waterloo or Rubicon? Here be Dragons (Version 4.1).
Aaronson continues:
I freely admit that I have no principled definition of “general intelligence,” let alone of “superintelligence.” To my mind, though, there’s a simple proof-of-principle that there’s something an AI could do that pretty much any of us would call “superintelligent.” Namely, it could say whatever Albert Einstein would say in a given situation, while thinking a thousand times faster. Feed the AI all the information about physics that the historical Einstein had in 1904, for example, and it would discover special relativity in a few hours, followed by general relativity a few days later. Give the AI a year, and it would think … well, whatever thoughts Einstein would’ve thought, if he’d had a millennium in peak mental condition to think them.
If nothing else, this AI could work by simulating Einstein’s brain neuron-by-neuron—provided we believe in the computational theory of mind, as I’m assuming we do. It’s true that we don’t know the detailed structure of Einstein’s brain in order to simulate it [...]. But that’s irrelevant to the argument. It’s also true that the AI won’t experience the same environment that Einstein would have—so, alright, imagine putting it in a very comfortable simulated study, and letting it interact with the world’s flesh-based physicists. A-Einstein can even propose experiments for the human physicists to do—he’ll just have to wait an excruciatingly long subjective time for their answers. But that’s OK: as an AI, he never gets old.
Next let’s throw into the mix AI Von Neumann, AI Ramanujan, AI Jane Austen, even AI Steven Pinker—all, of course, sped up 1,000x compared to their meat versions, even able to interact with thousands of sped-up copies of themselves and other scientists and artists. Do we agree that these entities quickly become the predominant intellectual force on earth—to the point where there’s little for the original humans left to do but understand and implement the AIs’ outputs (and, of course, eat, drink, and enjoy their lives, assuming the AIs can’t or don’t want to prevent that)?
Eh. Now that I have an explicit definition of “mind,” I have no need for a definition of artificial intelligence. While my primer (Relational Nets Over Attractors, A Primer: Part 1, Design for a Mind) is mostly about the human mind and the human brain, the definition I propose there is substrate-neutral, which has the side effect that I can talk about artificial minds as mechanisms, not magic, to use Pinker’s formulation.
Aaronson also notes:
I should clarify that, in practice, I don’t expect AGI to work by slavishly emulating humans—and not only because of the practical difficulties of scanning brains, especially deceased ones. Like with airplanes, like with existing deep learning, I expect future AIs to take some inspiration from the natural world but also to depart from it whenever convenient. The point is that, since there’s something that would plainly count as “superintelligence,” the question of whether it can be achieved is therefore “merely” an engineering question, not a philosophical one.
That is consistent with the view I have articulated in the primer.
Aaronson has more to say, as does Pinker. As of this moment, the dialog has attracted 100 comments (including two from me). It’s worth exploring.