Tuesday, March 17, 2015

"Intelligence" is bullsh•t, and AI is chasing a mirage

In MIT's Tech Review, a proposal for a revised Turing Test:
Riedl agrees that the test should be broad: “Humans have broad capabilities. Conversation is just one aspect of human intelligence. Creativity is another. Problem solving and knowledge are others.”

With this in mind, Riedl has designed one alternative to the Turing test, which he has dubbed the Lovelace 2.0 test (a reference to Ada Lovelace, a 19th-century English mathematician who programmed a seminal calculating machine). Riedl’s test would focus on creative intelligence, with a human judge challenging a computer to create something: a story, poem, or drawing. The judge would also issue specific criteria. “For example, the judge may ask for a drawing of a poodle climbing the Empire State Building,” he says. “If the AI succeeds, we do not know if it is because the challenge was too easy or not. Therefore, the judge can iteratively issue more challenges with more difficult criteria until the computer system finally fails. The number of rounds passed produces a score.”

Riedl’s test might not be the ideal successor to the Turing test. But it seems better than setting any single goal. “I think it is ultimately futile to place a definitive boundary at which something is deemed intelligent or not,” Riedl says. “Who is to say being above a certain score is intelligent or being below is unintelligent? Would we ever ask such a question of humans?”
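For what it's worth, the round-based scoring loop Riedl describes is simple enough to sketch. Below is a minimal Python illustration of that protocol as I read it: the judge, the challenge generator, and the system under test are all hypothetical stubs of my own, not anything from Riedl's actual proposal.

```python
import random

# Hypothetical stand-ins for the human judge and the AI under test;
# nothing here comes from Riedl's paper, it just mirrors the loop he describes.

def propose_challenge(difficulty):
    """Judge issues a creative task with `difficulty` extra constraints."""
    return f"poodle climbing the Empire State Building, {difficulty} added criteria"

def attempt(challenge):
    """The system under test returns some artifact (stubbed)."""
    return "artifact for: " + challenge

def judge_passes(challenge, artifact):
    """Human pass/fail judgment, stubbed here as a coin flip."""
    return random.random() < 0.5

def lovelace_2_score(max_rounds=10):
    """Issue progressively harder challenges until the system fails.

    The score is the number of rounds passed, per Riedl's description.
    """
    for difficulty in range(max_rounds):
        challenge = propose_challenge(difficulty)
        if not judge_passes(challenge, attempt(challenge)):
            return difficulty
    return max_rounds

print(lovelace_2_score())
```

The point of the sketch is that the whole test reduces to "count rounds until the first failure," which is exactly where Riedl's own objection bites: any threshold you set on that count is arbitrary.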
Eh. From Benzon and Hays, The Evolution of Cognition (1990; emphasis mine):
A game of chess between a computer program and a human master is just as profoundly silly as a race between a horse-drawn stagecoach and a train. But the silliness is hard to see at the time. At the time it seems necessary to establish a purpose for humankind by asserting that we have capacities that it does not. It is truly difficult to give up the notion that one has to add "because . . ." to the assertion "I'm important." But the evolution of technology will eventually invalidate any claim that follows "because." Sooner or later we will create a technology capable of doing what, heretofore, only we could.

Perhaps adults who, as children, grow up with computers might not find these issues so troublesome. Sherry Turkle (1984) reports conversations of young children who routinely play with toys which "speak" to them—toys which teach spelling, dolls with a repertoire of phrases. The speaking is implemented by relatively simple computers. For these children the question about the difference between living things and inanimate things—the first ontological distinction which children learn (Keil 1979)—includes whether or not they can "talk," or "think," which these computer toys can do.

These criteria do not show up in earlier studies of children's thought (cf. Piaget 1929). For children who have not had exposure to such toys it is perfectly sensible to make the capacity for autonomous motion the crucial criterion. To be sure it will lead to a misjudgment about cars and trains, but the point is that there is no reason for the child to make thinking and talking a criterion. The only creatures who think and talk are people, and people also move. Thus thinking and talking won't add anything to the basic distinction between living and inanimate things which isn't covered by movement.
