Sunday, May 23, 2010

They’re at it again, hacking the human mind, NOT

I’m referring to a recent post by John Horgan, which opens thus:
Scientists are on the verge of building an artificial brain! How do I know? Terry Sejnowski of the Salk Institute said so right here on ScientificAmerican.com. He wrote that the goal of reverse-engineering the brain—which the National Academy of Engineering recently posed as one of its "grand challenges"—is "becoming increasingly plausible." Scientists are learning more and more about the brain, and computers are becoming more and more powerful. So naturally computers will soon be able to mimic the brain's workings. So says Sejnowski.
Horgan then registers his extreme skepticism and ends by pointing out that you can “go back a decade or two—or five or six—and you will find artificial intelligence pioneers like Marvin Minsky and Herbert Simon proclaiming, because of exciting advances in brain and computer science: Artificial brains are coming!”

This brand of technophilic wishful thinking has a sibling in natural language processing. Starting in the mid-1950s the U.S. Department of Defense funded research in machine translation – having a computer translate from one language to another. Early results were encouraging – they always are, no? – hopes were high, promises were made, and money was invested. But later results proved disappointing and the money people in the DoD began to wonder whether their money had been well used. In the mid-60s a committee was established to assess the field and make recommendations. This committee – the Automatic Language Processing Advisory Committee (my teacher, David Hays, headed up machine translation research at the RAND Corporation and was on the committee) – issued its report in 1966. The ALPAC report (downloadable PDF), as it’s called, said that practical results were not likely in the near term, but that money should be invested in basic research. Congress and the DoD understood the first part of that, but not the second. The money dried up.

Meanwhile the AI boys were still seeing Great Things in the not-too-distant future. They kept on seeing them, the research got better and better, and by the late 1970s commercial spin-offs were engendering dreams of an AI-driven perpetual money machine. Hotcha! The dreams didn’t work out and the mid-1980s saw the advent of the so-called AI Winter. The commercial money dried up and the field was in disgrace.

Now, according to Horgan, the brain boys are going to take their whacks at the Artificial Supermind Piñata, aka ASP. We all know what an asp is: a poisonous snake. So why do folks keep chasing the same deadly snake?

No doubt there’s an element of testosterone-fueled hubris here. I believe, however, that there’s something else, something that gives this hubris energy on which to feed: ignorance. We simply don’t know what we’re doing. In turn, that means that we don’t know how to evaluate prospects for the future. Every time a new family of techniques is discovered, there’s always going to be new hope that those techniques will solve the problem. Finally.

Here’s an analogy that I find useful in thinking about this business of always seeing The Solution around the next corner. The analogy is based on those old Christmas tree lights that were wired in series. When one light went out the whole string went out. To get the lights working again you had to test each bulb individually until you found one that was broken. Then you’d replace it in the string, turn the switch, and – if it was the only defective bulb – the lights would go on. If they didn’t, then you had to keep on looking until you found another broken bulb. And so forth. It was a little tedious, but not difficult.

Think of the problem of building an artificial human mind as consisting of, say, 100,000 little problems. Each of them is a light bulb; they’re all connected into one string, and that string is wired in series. If even one bulb is missing or defective, the string won’t light. In the analogy the string’s initial state is that it won’t light. We don’t know which bulb or bulbs are causing the problem, or even how many are defective. In fact, a thousand bulbs are defective and they’re randomly scattered along the string. Research, then, is the process of testing each bulb to see whether or not it works. If it doesn’t, you replace it.

So, finding a broken bulb – one in a hundred – is a good thing. It means we’ve got a chance to get things right. Just replace the bulb and flip the switch. When the string doesn’t light . . . Drat, failed again!
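To make the arithmetic of the analogy concrete, here’s a minimal sketch in Python – my own illustration, not anything from Horgan or the brain modelers – using the numbers above: 100,000 bulbs, 1,000 defects scattered at random. Every fix is genuine progress, yet the string stays dark until the very last defective bulb has been found and replaced.

import random

# Toy model of the series-wired light string: 100,000 bulbs ("little problems"),
# 1,000 of them defective, scattered at random along the string.
TOTAL_BULBS = 100_000
DEFECTIVE = 1_000

broken = set(random.sample(range(TOTAL_BULBS), DEFECTIVE))
print(f"Chance that any one test turns up a defect: {DEFECTIVE / TOTAL_BULBS:.0%}")

tests = 0
fixes = 0
for bulb in random.sample(range(TOTAL_BULBS), TOTAL_BULBS):  # test bulbs in random order
    tests += 1
    if bulb in broken:
        broken.remove(bulb)   # found a defective bulb: replace it
        fixes += 1
        if not broken:        # the string lights only when *every* defect is gone
            print(f"String finally lit after {fixes} fixes and {tests} tests.")
            break
        # otherwise: flip the switch, and . . . still dark

Run it and the last defect typically doesn’t turn up until nearly all 100,000 bulbs have been tested; the first 999 fixes were real, they just weren’t enough.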

There’s more work to do. But the failure doesn’t necessarily mean that the folks who thought they were finally going to bust the piñata were blowing smoke up our collective posterior fundament. Perhaps they were, perhaps they weren’t. Chances are that they’ve really figured something out, and that their techniques are valuable. But the problem is bigger than they know, than we know. Or can imagine.

Allow me to quote from an article, “The Evolution of Cognition,”* that David Hays and I published two decades ago:
The computer is similarly ambiguous. It is clearly an inanimate machine. Yet we interact with it through language; a medium heretofore restricted to communication with other people. To be sure, computer languages are very restricted, but they are languages. They have words, punctuation marks, and syntactic rules. To learn to program computers we must extend our mechanisms for natural language.

As a consequence it is easy for many people to think of computers as people. Thus Joseph Weizenbaum, with considerable dis-ease and guilt, tells of discovering that his secretary "consults" Eliza—a simple program which mimics the responses of a psychotherapist—as though she were interacting with a real person. Beyond this, there are researchers who think it inevitable that computers will surpass human intelligence and some who think that, at some time, it will be possible for people to achieve a peculiar kind of immortality by "downloading" their minds to a computer. As far as we can tell such speculation has no ground in either current practice or theory. It is projective fantasy, projection made easy, perhaps inevitable, by the ontological ambiguity of the computer. We still do, and forever will, put souls into things we cannot understand, and project onto them our own hostility and sexuality, and so forth.

A game of chess between a computer program and a human master is just as profoundly silly as a race between a horse-drawn stagecoach and a train. But the silliness is hard to see at the time. At the time it seems necessary to establish a purpose for humankind by asserting that we have capacities that it does not. It is truly difficult to give up the notion that one has to add "because . . . " to the assertion "I'm important." But the evolution of technology will eventually invalidate any claim that follows "because." Sooner or later we will create a technology capable of doing what, heretofore, only we could.
Computers have so far failed to surpass us in creativity and intelligence – though they can now beat us at chess – yet they can do amazing and useful things, things that were not predicted 20, 30, or 50 years ago. Whether or not they end up beating us at any and every game we play is immaterial.

I do, however, believe Horgan is correct about current prospects. I fear that the current batch of brain boys are mistaken in their hopes for the next decade or two. No doubt we’ll see amazing developments, but I fear we have more to learn about the mind and the brain before we can build one. And by the time we learn enough, I suspect that the question of whether or not we can build one will have become irrelevant.

*William Benzon and David Hays, “The Evolution of Cognition,” Journal of Social and Biological Structures 13 (1990): 297-320.
