First, I talk about the human metaphor at the heart of AI: How do we get beyond it, around it? Then I talk about the difference between a real phenomenon and a computer simulation of it. I conclude by extending that discussion to thoughts about embodiment.
For AI to blossom we need to continue learning from human behavior while at the same time abandoning the human metaphor.
Turing proposed the imitation game as a way of posing the question of artificial intelligence – is the machine really thinking? – without having to explicitly define the mechanisms and processes of intelligence. Ever since then AI has been using the so-called “Turing Test” as a way to measure progress and define its end goal. As a practical matter, only a relatively small set of researchers are setting out to achieve “human level” intelligence in a machine; most are content with practical success of some kind. But the field is haunted by this larger goal, and so is the general public.
Why is it such a potent dream? I suspect because it’s easy to conceptualize, in a sense. We know what humans are, we know what they do. We can think about that.
But, we don’t know how they do it. That is, we don’t understand humans in the terms we use to work with digital computers. In that sense, then, the objective of producing human level artificial intelligence is empty.
So we have a paradox, a goal that is meaningful in one sense, but utterly empty in another.
Meanwhile we now have AI systems that can beat any human at chess, and we have no particular reason to think they are doing so in human-mimetic terms. We have other AI systems that can predict the folding patterns of proteins, a feat with no natural counterpart: natural human intelligence has never been able to do that. These intelligences are thus deeply artificial. Not only is their substrate silicon and metal rather than neurons and biochemistry, but their actions, their strategies and tactics, are artificial as well.
How can we use the success of these deeply artificial systems to wean us from the chimerical pursuit of artificial human beings? Yes, we have much to learn from the study of how humans perceive, think, move, and act, but that should not distract us from the need to understand artificial systems on their ‘native’ terms. How do we apply our study of the human to our investigations of the artificial? And how do we take it in the other direction as well?
How do we move back and forth between the natural and the artificial (in the domain of intelligence) while at the same time being clear about the difference?
Simulation and the real
The brain is a dynamical system, not a computational one, any more than an atomic reaction is a computational system. The difference between a turbulent fluid flow, say, in a stream, and a simulation of one is clear. The difference between neurodynamics and a simulation of neurodynamics is just as real, but seems harder to see. Why?
Both simulations involve symbols running on digital computers. But there are no symbols in the real processes. Perhaps it’s because we know that digital computers pass signals all over the place and it is easy to think of neural inputs and outputs as signals, which they are. But those signals are not governed by symbols, as are the signals in digital computers.
In contrast, it is not so easy to think of the atoms and molecules in a stream as signals. They aren't. They're governed by physical laws, nothing more. Of course, the signals in digital computers are governed by physical law as well. But the organization of those signals is governed by, shall we say, symbolic considerations. Those symbolic considerations exist in the minds of the people who design and use digital computers. No such designers/users exist for streams.
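The point can be made concrete with a toy example (mine, not the essay's): a few Euler steps of the logistic-growth equation, a minimal sketch of what any simulation of a dynamical system amounts to. Every quantity in the loop is a symbol, a floating-point number whose interpretation exists only in the designer's mind; the stream being modeled contains no such tokens.

```python
# Illustrative sketch: simulating the simple dynamical system
# dx/dt = r * x * (1 - x) with forward-Euler steps.
# Each value here is a symbol -- a float whose meaning
# ("state", "time step", "growth rate") lives in the designer's
# head, not in the physical process being modeled.

def simulate_logistic(x0, r=1.0, dt=0.1, steps=50):
    """Return the Euler-integrated trajectory of dx/dt = r*x*(1-x)."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = x + dt * r * x * (1 - x)  # symbolic update rule
        trajectory.append(x)
    return trajectory

traj = simulate_logistic(0.1)
print(traj[-1])  # the state climbs toward the fixed point at 1.0
```

The update rule is the designer's symbolic stand-in for physical law; the real system just evolves.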
Embodiment and neural dynamics
To understand the nervous system as a complex dynamical system is to understand it as continuous with, embedded in, the physical world. It is not a symbol system; it is a physical system.
Yet, if any system is symbolic, it is language. And language is implemented in nervous systems. Language exists in a social system. It is a vehicle for communication between individual human beings. And we cannot understand social systems, the interactions between human beings, in purely physical terms. Social systems may well exhibit complex dynamics – I think they do, see, e.g., De Vany's Hollywood Economics – but we cannot understand those dynamics by attempting to reduce them to physical dynamics. The atoms of physical dynamics are, well, physical atoms. The atoms of social dynamics are individual human beings.
It is language that mediates between these two dynamical realms. That is what we mean when we talk of language and cognition as being embodied.