I’ve been thinking about those philosophical arguments I find so useless and have an idea about what they might be up to. Let’s look at this statement early in a classic paper by Wilfrid Sellars, “Philosophy and the Scientific Image of Man” (1963):
For the philosopher is confronted not by one complex many-dimensional picture, the unity of which, such as it is, he must come to appreciate; but by two pictures of essentially the same order of complexity, each of which purports to be a complete picture of man-in-the-world, and which, after separate scrutiny, he must fuse into one vision. Let me refer to these two perspectives, respectively, as the manifest and the scientific images of man-in-the-world.
Those philosophical arguments that I find so unsatisfying are about the manifest image, while the investigations in neuroscience, cognitive science, and artificial intelligence are articulated within the scientific image. Thus we find that the formulations and arguments that interest me are always couched in the specialized language of some intellectual discipline, while the philosophical arguments are relatively free of specialized terminology, though they may often be abstract and difficult to grasp – I’m thinking particularly of Searle’s Chinese Room argument.
I'm thinking in one world. The philosophers are in a different one. The connection between the two is obscure.
The problem is an ontological one: What kind of thing is a computer (at its most sophisticated)? Is it a machine or is it human? In some ways computers are like machines: they’re non-living, made of inorganic materials, and don’t look at all like animals or humans. But in some ways they’re like humans: they do things that only humans have heretofore done, calculation, sorting, tabulation, and so forth; and we interact with them through language. To be sure, computer languages are rigid in a way that natural languages aren’t, but still, they are languages. They call on our language faculties. So the philosophers are trying to fit computers into a system of ontological categories that has no place for them.
Why not just recognize computers as a separate category of thing, neither human nor mechanical, but something else, something equally fundamental and hence irreducible?
Like an iron horse
We’ve been there before. Consider this passage from Thoreau’s “Sounds” chapter in Walden (1854):
When I meet the engine with its train of cars moving off with planetary motion ... with its steam cloud like a banner streaming behind in gold and silver wreaths ... as if this traveling demigod, this cloud-compeller, would ere long take the sunset sky for the livery of his train; when I hear the iron horse make the hills echo with his snort like thunder, shaking the earth with his feet, and breathing fire and smoke from his nostrils, (what kind of winged horse or fiery dragon they will put into the new Mythology I don’t know), it seems as if the earth had got a race now worthy to inhabit it.
There’s quite a bit of figurative language in that paragraph, but I’m interested in two metaphors, iron horse and fiery dragon. The iron horse is a well-known metaphor for a steam locomotive, perhaps familiar from all those old Westerns where Indians use the term. Fiery dragon is not so common, but its use in that context is perfectly intelligible.
What was Thoreau doing when he used those figures? He certainly recognized them as figures. He knew that the thing about which he was talking was some glorified mechanical contraption. He knew it was neither horse nor dragon, nor was it living.
Or was it? Did he really know that it wasn’t alive? Or did he think slash fear that it might be a new kind of life? We live in a world where everyone is familiar with cars and trains and airplanes from an early age, not to mention all sorts of smaller self-propelled devices. We find nothing strange about such things. They’ve always been part of our world. And so, as we learned to talk, as we learned to think, we made places for these things in our worldview, along with rocks, dandelions, raccoons, the wind, and other humans.
But Thoreau and his fellows did not grow up in such a world. They grew up in, and learned to think about, a world in which things that moved across the surface of the earth did so under either animal power or human power. When steam locomotives first appeared, even primitive ones, that was the first time in history that people saw inanimate beings, mere collocations of things, move over the surface of the earth under their own power.
So where would they fit into the conceptual system? With other mechanical devices, like pumps and stationary engines, or with mobile animals and humans? They had properties of each. In physical substance they were like the mechanical devices. But in what they did, they were like animals and humans. Fact is, they didn’t fit the conceptual system. Maybe they WERE a new form of life.
Well, they weren’t and they aren’t. But they did pose conceptual problems. So I suspect that, when Thoreau used those figures, iron horse and fiery dragon, he used them to capture the in-betweenness of the steam locomotive, the fact that its nature seemed to fall between the cracks in the category system.
And so our imaginative life is populated with things that are neither human nor machine, but somehow both: cyborgs, androids, “artificial beings” (I’m thinking of Tezuka Osamu’s Michi in Metropolis), and so forth.
Trillions of moving parts
As a telling, if minor, example of this ontological confusion, consider a passage from a 2013 Dan Dennett video, The Normal Well-Tempered Mind:
What Turing gave us for the first time (and without Turing you just couldn’t do any of this) is a way of thinking in a disciplined way about phenomena that have, as I like to say, trillions of moving parts. Until late 20th century, nobody knew how to take seriously a machine with a trillion moving parts. It’s just mindboggling.
There are other places where Dennett expresses the same idea but talks of “building blocks” rather than “moving parts”. “Building blocks” is certainly the more accurate phrase because the units Dennett is talking about don’t move. They aren’t mechanical things like gears, levers, pulleys, and wheels.
They’re physically static elements in electronic circuits. But they are active. They do things. They channel the motions of electrons. But electrons are not very tiny billiard balls moving through very small pipes. They’re a different kind of creature.
I thus suspect that Dennett himself is still thinking in a world where we have machines, of the familiar kind with moving parts, on the one hand, and humans (and perhaps animals) on the other, biological beings constructed of living molecules. In that world computers must be one or the other. So Dennett thinks of computers as machines, and, since human minds are computers, they must be machines as well. Ergo, we are machines, meat machines.
He hasn’t figured out that computers are something else, something equally fundamental. Not machines, but not biological beings either. And minds? They’re not machines either. And, though they are organic, they’re different in some fundamental way from hearts, muscles, livers, and so on.
It’s a strange new conceptual world we find ourselves in, isn’t it? Once again, a brave new world has made us strangers in a strange land.