Iris Berent, Op-Ed: "The real reason we're afraid of robots," LA Times, July 26, 2020:
If you saw a ball start rolling all by itself, you’d be astonished. But you wouldn’t be the least bit surprised to see me spontaneously rise from my seat on the couch and head toward the refrigerator.
That is because we instinctively interpret the actions of physical objects, like balls, and living agents, like people, according to different sets of principles. In our intuitive psychology, objects like balls always obey the laws of physics; they move only by contact with other objects. People, in contrast, are agents who have minds of their own, which endow them with knowledge, beliefs, and goals that motivate them to move of their own accord. We thus ascribe human actions not to external material forces but to internal mental states.
Of course, most modern adults know that thought occurs in the physical brain. But deep down, we feel otherwise. Our unconscious intuitive psychology causes us to believe that thinking is free from the physical constraints on matter. Extensive psychological testing shows that this is true for people in all kinds of societies. The psychologist Paul Bloom suggests that intuitively, all people are dualists, believing that mind and matter are entirely distinct.
AI violates this bedrock belief. Siri and Roomba are man-made artifacts, but they exhibit some of the same intelligent behavior that we typically ascribe to living agents. Their acts, like ours, are impelled by information (thinking), but their thinking arises from silicon, metal, plastic, and glass. While in our intuitive psychology, thinking minds, animacy, and agency all go hand in hand, Siri and Roomba demonstrate that these properties can be severed: they think, but they are mindless; they are inanimate but semiautonomous.
David Hays and I pointed this out some time ago in our paper, "The Evolution of Cognition" (1990):
One of the problems we have with the computer is deciding what kind of thing it is, and therefore what sorts of tasks are suitable to it. The computer is ontologically ambiguous. Can it think, or only calculate? Is it a brain or only a machine?
The steam locomotive, the so-called iron horse, posed a similar problem for people at Rank 3. It is obviously a mechanism and it is inherently inanimate. Yet it is capable of autonomous motion, something heretofore only within the capacity of animals and humans. So, is it animate or not? Perhaps the key to acceptance of the iron horse was the adoption of a system of thought that permits separation of autonomous motion from autonomous decision. The iron horse is fearsome only if it may, at any time, choose to leave the tracks and come after you like a charging rhinoceros. Once the system of thought had shaken down in such a way that autonomous motion did not imply the capacity for decision, people made peace with the locomotive.
The computer is similarly ambiguous. It is clearly an inanimate machine. Yet we interact with it through language, a medium heretofore restricted to communication with other people. To be sure, computer languages are very restricted, but they are languages. They have words, punctuation marks, and syntactic rules. To learn to program computers, we must extend our mechanisms for natural language.
As a consequence it is easy for many people to think of computers as people. Thus Joseph Weizenbaum, with considerable dis-ease and guilt, tells of discovering that his secretary “consults” Eliza—a simple program which mimics the responses of a psychotherapist—as though she were interacting with a real person (Weizenbaum 1976). ... We still do, and forever will, put souls into things we cannot understand, and project onto them our own hostility and sexuality, and so forth.
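ELIZA worked by simple keyword matching and pronoun reflection rather than anything resembling understanding, which is what made the secretary's reaction so unsettling. Here is a minimal sketch of that technique in Python; the rules and pronoun table are illustrative assumptions, not Weizenbaum's actual script:

import random
import re

# Illustrative ELIZA-style rules (assumed, not Weizenbaum's): each pattern
# captures a fragment of the user's sentence, which is reflected and then
# slotted into a canned response template.
RULES = [
    (re.compile(r"\bi am (.*)", re.I),
     ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (re.compile(r"\bi need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bmy (.*)", re.I),
     ["Tell me more about your {0}."]),
]

# Swap first- and second-person words so "my job" comes back as "your job".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "mine": "yours"}

def reflect(fragment):
    """Reflect pronouns in the captured fragment of the user's sentence."""
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def respond(sentence):
    """Match the first keyword rule and fill its template; else deflect."""
    for pattern, templates in RULES:
        match = pattern.search(sentence)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return "Please go on."  # content-free prompt when nothing matches

print(respond("I am unhappy with my job"))
# e.g. "How long have you been unhappy with your job?"

The program has no model of the conversation at all; each reply is manufactured from the user's own words, which is precisely why it is so easy to project a mind onto it.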