I'm bumping this post from September 2014 to the top because it's my oldest post on this topic.
In my recent post arguing that “superintelligent” computers are somewhere between very unlikely and impossible, I asserted: “This hypothetical device has to acquire and construct its superknowledge ‘from the inside’ since no one is going to program it into superintelligence ...” Just what does that mean: from the inside?
The only case of an intelligent mind that we know of is the human mind, and the human mind is built from the “inside.” It isn’t programmed by external agents. To be sure, we sometimes refer to people as being programmed to do this or that, and when we do so the implication is that the “programming” is somehow against the person’s best interests, that the behavior is in some way imposed on them.
And that, of course, is how computers are programmed. They are designed to be imposed upon by programmers. A programmer will survey the application domain, build a conceptual model of it, express that conceptual model in some design formalism, formulate computational processes in that formalism, and then produce code that implements those processes. To do this, of course, the programmer must also know something about how the computer works since it’s the computer’s operations that dictate the language in which the process design must be encoded.
To be a bit philosophical about this, the computer programmer has a “transcendental” relationship with the computer and the application domain. The programmer is outside and “above” both, surveying and commanding them from on high. All too frequently, this transcendence is flawed: the programmer’s knowledge of both domain and computer is faulty, and the resulting software is less than wonderful.
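To make the contrast with what follows concrete, here is a minimal sketch of programming “from the outside” (in Python; the toy fruit domain and every rule in it are my own illustrative assumptions, not anything from the post). The programmer’s conceptual model is written down explicitly, and every rule is legible to anyone who reads the code.

```python
# A purely invented toy domain: the programmer surveys it, forms a conceptual
# model ("bananas are yellow and light, apples are red or green and light"),
# and imposes that model on the machine as explicit, readable rules.
def classify_fruit(weight_g: float, color: str) -> str:
    """Hand-authored rules encoding the programmer's model of the domain."""
    if color == "yellow" and weight_g < 250:
        return "banana"
    if color in ("red", "green") and weight_g < 300:
        return "apple"
    return "unknown"

print(classify_fruit(120, "yellow"))  # banana
print(classify_fruit(180, "red"))     # apple
```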
Things are a bit different with machine learning. Let us say that one uses a neural net to recognize speech sounds or recognize faces. The computer must be provided with a front end that transduces visual or sonic energy and presents the computer with some low-level representation of the sensory signal. The computer then undertakes a learning routine of some kind, the result of which is a bunch of weightings on features in the net. Those weightings determine how the computer will classify inputs, whether mapping speech sounds to letters or faces to identifiers.
Now, it is possible to examine those feature weightings, but for the most part they will be opaque to human inspection. There won’t be any obvious relationship between those weightings and the inputs and outputs of the program. They aren’t meaningful to the “outside.” They make sense only from the “inside.” The programmer no longer has transcendental knowledge of the inner operations of the program that he or she built.
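Here is a hedged illustration of that opacity (a toy sketch in Python with NumPy; the two-dimensional inputs, the made-up two-class task, and the size of the network are all my own assumptions). We can train a tiny network and then print its weights, but the numbers that do the classifying are not labeled in any terms a human can read off.

```python
import numpy as np

# Toy data: 200 two-dimensional points in two classes (a stand-in for the
# low-level representation a sensory front end would provide).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)  # an arbitrary pattern

# A tiny one-hidden-layer network.
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass (gradients of the cross-entropy loss).
    g_out = (p - y) / len(X)
    g_h = g_out @ W2.T * (1 - h ** 2)
    W2 -= lr * (h.T @ g_out); b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * (X.T @ g_h);   b1 -= lr * g_h.sum(axis=0)

print("accuracy:", ((p > 0.5) == y).mean())
print("hidden-layer weights:\n", W1)  # just numbers; nothing in them names
print("output weights:\n", W2)        # the concepts the net has learned to use
```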
If we want a computer to hold vast intellectual resources at its command, it’s going to have to learn them, and learn them from the inside, just like we do. And we’re not going to know, in detail, how it does it, any more than we know, in detail, what goes on in one another’s minds.
How do we do it? It starts in utero. When neurons first differentiate they are, of course, living cells, and further differentiation is determined in part by the neurons themselves. Each neuron “seeks” nutrients and generates outputs to that end. When we analyze neural activity we tend to treat each neuron, and its activity, as a component of a complicated circuit in service of the whole organism. But that’s not how neurons “see” the world. Each neuron is just trying to survive.
Think of ants in a colony or bees in a swarm. There may be some mysterious coherence to the whole, but that coherence is the result of each individual pursuing its own purposes, however limited those purposes may be. So it is with brains and neurons.
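A toy simulation, entirely my own illustration, makes the point concrete: give each agent a single selfish, local rule and a loose global coherence emerges that no individual ever aimed at.

```python
import numpy as np

# Fifty agents on a plane. Each follows one local, self-interested rule:
# drift toward the average position of its five nearest neighbours (say, to
# stay near food or warmth). No agent "intends" group cohesion.
rng = np.random.default_rng(1)
agents = rng.uniform(-10, 10, size=(50, 2))

def spread(a):
    return a.std(axis=0).mean()

print("initial spread:", round(spread(agents), 2))
for _ in range(100):
    d = np.linalg.norm(agents[:, None, :] - agents[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nearest = np.argsort(d, axis=1)[:, :5]     # each agent's 5 nearest neighbours
    target = agents[nearest].mean(axis=1)      # their average position
    agents += 0.1 * (target - agents)          # a small, selfish step
print("final spread:", round(spread(agents), 2))  # the swarm has pulled together
```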
The nervous system develops in a highly constrained environment in utero, but it is still a living and active system. And the prenatal auditory system can hear and respond to sounds from the external world. When the infant is born its world changes dramatically. But the brain is still learning and acting “from the inside.”
The structure of the brain is, of course, the result of millions of years of evolutionary history. The brain has been “designed” by evolution to operate in a certain world. It is not designed and built as a general-purpose device, and yet it becomes capable of many things, including designing and building general-purpose computational devices.
But if we want those devices to be capable in an “intelligent” way we’re going to have to let them learn their way about in the world. We can design a machine to learn and provide it with an environment in which it can learn, an environment that most likely will entail interacting with us, but just what it will learn and how it will learn it, that’s taking place inside the machine, outside of our purview. The details of that knowledge are not going to be transparent to external inspection.
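Here is one minimal picture of what “design a machine to learn and provide it with an environment in which it can learn” can look like computationally: a sketch of my own, assuming a standard tabular Q-learning rule and a six-cell toy world, neither of which comes from the post. We write the learning rule and the world; the values the agent ends up with are filled in by its own trial and error.

```python
import random

random.seed(0)

# A six-cell corridor. The agent starts at cell 0 and is rewarded only for
# reaching cell 5. We supply the world and the update rule; the contents of
# the Q-table are the agent's own doing.
N_STATES, ACTIONS = 6, (-1, +1)            # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy choice: mostly exploit what it knows, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned values steer the agent's behaviour, but to us they are just numbers.
for s in range(N_STATES - 1):
    print(s, {a: round(Q[(s, a)], 2) for a in ACTIONS})
```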
We can imagine a machine that picks up a great deal of knowledge by reading books and articles. But that alone is not sufficient for deep knowledge of any domain. No human ever gained deep knowledge merely through reading. One must interact with the world through building things, talking with others, conducting experiments, and so forth. It may, in fact, have to be a highly capable robot, or at least have robotic appendages, so that it can move about in the world. I don’t see how our would-be intelligent computer can avoid doing this.
Just how much could a computer learn in this fashion? We don’t know. If, say, two different computers learned about more or less the same world in this fashion, would they be able to exchange knowledge simply by direct sharing of internal states? That’s a very interesting question, one for which I do not have an answer. I have some notes suggesting “why we'll never be able to build technology for Direct Brain-to-Brain Communication,” but that is a somewhat different situation since we didn’t design and construct our brains and they weren’t built for direct brain-to-brain communication. Perhaps things will go differently with computers.
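One way to see why the question is not trivial, though again this is only a toy sketch of my own and not an answer: two identical networks trained on the very same toy data, differing only in their random starting weights, will typically both learn the task yet end up with internals that look nothing alike, so “direct sharing of internal states” would be anything but simple.

```python
import numpy as np

def train(seed):
    """Train one tiny net on a fixed toy task; only the random start differs."""
    X = np.random.default_rng(42).normal(size=(200, 2))   # the same "world" each time
    y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)
    rng = np.random.default_rng(seed)
    W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
    W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
    for _ in range(3000):
        h = np.tanh(X @ W1 + b1)
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
        g = (p - y) / len(X)
        g_h = g @ W2.T * (1 - h ** 2)
        W2 -= 0.5 * (h.T @ g); b2 -= 0.5 * g.sum(axis=0)
        W1 -= 0.5 * (X.T @ g_h); b1 -= 0.5 * g_h.sum(axis=0)
    return W1, ((p > 0.5) == y).mean()

Wa, acc_a = train(seed=1)
Wb, acc_b = train(seed=2)
print("accuracies:", acc_a, acc_b)                        # both typically high
print("mean weight difference:", np.abs(Wa - Wb).mean())  # yet the internals diverge
```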
By and large, we don’t know what future computing will bring. A computer with facilities roughly comparable to the computer in Star Trek’s Enterprise would be a marvelous thing to have. It wouldn’t be superintelligent, but its performance would, nonetheless, amaze us.
Before a robot starts building things, perhaps it should start taking things apart. That seems to be the basis of what happened in human evolution. Phonosemantic iconicity is mapped to the way that our oral cavity processes food morsels, graded over the different tooth types by material texture (the incisors are nippers, the canines piercers, the bicuspids shears, and the molars crushers/crackers). Each tooth type, as you move further back, generally processes materials whose textural cohesion involves a greater number of dimensions, so the nippers handle the simplest case (as when you pluck out a stem from an apple). Although many vertebrates do make primitive constructions (nests, for example), none are able to make complex ones involving many distinctive parts, and their construction strategies involve only repetition/iteration of the same simple step, unlike humans.