David Ferrucci:
“To me, there’s a very deep philosophical question that I think will rattle us more than the economic and social change that might occur,” Ferrucci said as we ate. “When machines can solve any given task more successfully than humans can, what happens to your sense of self? As humans, we went from the chief is the biggest and the strongest because he can hurt anyone to the chief is the smartest, right? How smart are you at figuring out social situations, or business situations, or solving complex science or engineering problems? If we get to the point where, hands down, you’d give a computer any task before you’d give a person any task, how do you value yourself?”
Ferrucci said that though he found Tegmark’s sensitivity to the apocalypse fascinating, he didn’t have a sense of impending doom. (He hasn’t signed Tegmark’s statement.) Some jobs would likely dissolve and policymakers would have to grapple with the social consequences of better machines, Ferrucci said, but this seemed to him just a fleeting transition. “I see the endgame as really good in a very powerful way, which is human beings get to do the things they really enjoy — exploring their minds, exploring thought processes, their conceptualizations of the world. Machines become thought-partners in this process.”
This reminded me of a report I’d read of a radical group in England that has proposed a ten-hour human workweek to come once we are dependent upon a class of beneficent robot labor. Their slogan: “Luxury for All.” So much of our reaction to artificial intelligence is relative. The billionaires fear usurpation, a loss of control. The middle-class engineers dream of leisure. The idea underlying Ferrucci’s vision of the endgame was that perhaps people simply aren’t suited for the complex cognitive tasks of work because, in some basic biological sense, we just weren’t made for it. But maybe we were made for something better.
From Benjamin Wallace-Wells, Jeopardy! Robot Watson Grows Up, New York Magazine, 20 May 2015.
William Benzon and David Hays:
One of the problems we have with the computer is deciding what kind of thing it is, and therefore what sorts of tasks are suitable to it. The computer is ontologically ambiguous. Can it think, or only calculate? Is it a brain or only a machine?
The steam locomotive, the so-called iron horse, posed a similar problem for people at Rank 3. It is obviously a mechanism and it is inherently inanimate. Yet it is capable of autonomous motion, something heretofore only within the capacity of animals and humans. So, is it animate or not? Perhaps the key to acceptance of the iron horse was the adoption of a system of thought that permits separation of autonomous motion from autonomous decision. The iron horse is fearsome only if it may, at any time, choose to leave the tracks and come after you like a charging rhinoceros. Once the system of thought had shaken down in such a way that autonomous motion did not imply the capacity for decision, people made peace with the locomotive.
The computer is similarly ambiguous. It is clearly an inanimate machine. Yet we interact with it through language, a medium heretofore restricted to communication with other people. To be sure, computer languages are very restricted, but they are languages. They have words, punctuation marks, and syntactic rules. To learn to program computers we must extend our mechanisms for natural language.
As a consequence it is easy for many people to think of computers as people. Thus Joseph Weizenbaum (1976), with considerable dis-ease and guilt, tells of discovering that his secretary "consults" Eliza--a simple program which mimics the responses of a psychotherapist--as though she were interacting with a real person. Beyond this, there are researchers who think it inevitable that computers will surpass human intelligence and some who think that, at some time, it will be possible for people to achieve a peculiar kind of immortality by "downloading" their minds to a computer. As far as we can tell such speculation has no ground in either current practice or theory. It is projective fantasy, projection made easy, perhaps inevitable, by the ontological ambiguity of the computer. We still do, and forever will, put souls into things we cannot understand, and project onto them our own hostility and sexuality, and so forth.
A game of chess between a computer program and a human master is just as profoundly silly as a race between a horse-drawn stagecoach and a train. But the silliness is hard to see at the time. At the time it seems necessary to establish a purpose for humankind by asserting that we have capacities that it does not. To give up the notion that one has to add "because . . . " to the assertion "I'm important" is truly difficult. But the evolution of technology will eventually invalidate any claim that follows "because." Sooner or later we will create a technology capable of doing what, heretofore, only we could.
From William Benzon and David G. Hays, The Evolution of Cognition, Journal of Social and Biological Structures 13(4): 297-320, 1990.