I’ve recently been arguing that AI has been dominated by chess from the beginning. I found this interesting passage in Luke Muehlhauser’s useful survey, What should we learn from past AI forecasts? (2016). He quotes from a 1983 book by Edward Feigenbaum and Pamela McCorduck (The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World, p. 38):
These young [AI scientists of the 1950s and 60s] were explicit in their faith that if you could penetrate to the essence of great chess playing, you would have penetrated to the core of human intellectual behavior. No use to say from here that somebody should have paid attention to all the brilliant chess players who are otherwise not exceptional, or all the brilliant people who play mediocre chess. This first group of artificial intelligence researchers… was persuaded that certain great, underlying principles characterized all intelligent behavior and could be isolated in chess as easily as anyplace else, and then applied to other endeavors that required intelligence.
That’s an utterly remarkable statement.
The thing about chess is that (1) like tic-tac-toe, it is a finite game, and (2) its ‘footprint’ in the physical world is trivial. Chess isn’t about dealing with the physical world; it’s about dealing with one’s opponent. Once AI confronted the richness and complexity of the physical world, as it did with vision and with natural language, it crashed and burned.
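To make the “finite game” point concrete, here is a minimal sketch (mine, not from any of the sources quoted here) of solving tic-tac-toe by brute-force minimax search. The whole game tree can be enumerated, and pure computation settles the outcome with no reference to anything outside the board.

```python
# A minimal sketch (illustration only): tic-tac-toe is small enough that
# exhaustive search over its finite game tree decides every position.
# No contact with the physical world is required.

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of the position for X: +1 win, 0 draw, -1 loss, with best play."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full: draw
    values = []
    for m in moves:
        board[m] = player
        values.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = None
    return max(values) if player == 'X' else min(values)

if __name__ == '__main__':
    # With perfect play from both sides, the empty board is a draw.
    print(minimax([None] * 9, 'X'))  # prints 0
```

Chess is also finite in principle, but its tree is far too large to enumerate this way, which is why chess programs lean on heuristics and evaluation functions. Still, the problem remains closed and abstract in a way that seeing a scene or understanding a sentence is not.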
Now, here’s Kevin Kelly (“The Myth of a Superhuman AI,” Wired Magazine, April 25, 2017) on what he calls “thinkism”:
Many proponents of an explosion of intelligence expect it will produce an explosion of progress. I call this mythical belief “thinkism.” It’s the fallacy that future levels of progress are only hindered by a lack of thinking power, or intelligence. [...]
Let’s take curing cancer or prolonging longevity. These are problems that thinking alone cannot solve. No amount of thinkism will discover how the cell ages, or how telomeres fall off. No intelligence, no matter how super duper, can figure out how the human body works simply by reading all the known scientific literature in the world today and then contemplating it. No super AI can simply think about all the current and past nuclear fission experiments and then come up with working nuclear fusion in a day. A lot more than just thinking is needed to move between not knowing how things work and knowing how they work.
And so on.
It seems to me that thinkism is the natural outgrowth of a discipline that has modeled itself on the conquest of chess. For chess IS something that is divorced from the physical world, something one can deal with through pure thought.
The allure of this conception is so strong that it seems to persist even in problem domains where it is not appropriate. I’m not sure why that is the case. Perhaps it is simply that it would be convenient if it were so, given that computation takes place in an abstract world at several removes from the richness and messiness of the physical.