In May of this year John Brockman hosted one of those high-class gab fests he loves so much. This one was on the theme of Possible Minds (from this book). Here's a talk by Rodney Brooks, with comments by various distinguished others, on the theme "The Cul-de-Sac of the Computational Metaphor". Brooks opens:
I’m worried that the crack cocaine of Moore’s law, which has given us more and more computation, has lulled us into thinking that that’s all there is. When you look at Claus Pias’s introduction to the Macy Conferences book, he writes, "The common precondition of the three foundational concepts of cybernetics—switching (Boolean) algebra, information theory and feedback—is digitality." They go straight into digitality in this conference. He says, "We considered Turing’s universal machine as a 'model' for brains, employing Pitts' and McCulloch’s calculus for activity in neural nets." Anyone who has looked at the Pitts and McCulloch papers knows it's a very primitive view of what is happening in neurons. But they adopted Turing’s universal machine.

How did Turing come up with Turing computation? In his 1936 paper, he talks about a human computer. Interestingly, he uses the male pronoun, whereas most of them were women. A human computer had a piece of paper, wrote things down, and followed rules—that was his model of computation, which we have come to accept.

[Note that Turing came up with his concept of computational process by abstracting over what he observed humans do while calculating. It's an abstracted imitation of a human activity. – B.B.]

We’re talking about cybernetics, but in AI, in John McCarthy’s 1955 proposal for the 1956 AI Workshop at Dartmouth, the very first sentence is, "We propose a study of artificial intelligence." He never defines artificial intelligence beyond that first sentence. That’s the first place it’s ever been used. But the second sentence is, "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." As a materialist reductionist, I agree with that.

The second paragraph is, "If a machine can do a job, then an automatic calculator can be programmed to simulate the machine." That’s a jump from any sort of machine to an automatic calculator. And that’s in the air, that’s what we all think. Neuroscience uses computation as a metaphor, and I question whether that’s the right set of metaphors. We know computation is not enough for everything. Classical computation cannot handle quantum information processing.
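Since the McCulloch-Pitts "calculus" comes up, it's worth seeing just how primitive that model is. Here's a minimal sketch in Python (my own illustration, not anything from the talk): a McCulloch-Pitts unit is nothing more than a binary threshold gate.

```python
# A McCulloch-Pitts "neuron" (1943) reduced to its essentials: a unit that
# fires (outputs 1) iff the weighted sum of its binary inputs reaches a
# threshold. No dynamics, no timing, no chemistry.

def mp_neuron(inputs, weights, threshold):
    """Binary threshold unit: fire iff the weighted sum >= threshold."""
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

# Logic gates as one-unit "nets":
AND = lambda x, y: mp_neuron([x, y], [1, 1], 2)
OR  = lambda x, y: mp_neuron([x, y], [1, 1], 1)
assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
```

That networks of such gates can compute any Boolean function is what licensed the jump to Turing's universal machine; everything a real neuron does beyond thresholding got abstracted away.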
Note the opposition/distinction between classical computing and quantum information processing: classical|quantum, computing|information processing. Of course quantum computing is all the rage in some quarters, as it promises enormous speedups on certain classes of problems.
Various people interrupt with observations about those initial remarks. Note this one from Stephen Wolfram: "The formalism of quantum mechanics, like the formalism of current classical mechanics, is about real numbers and is not similar to the way computation works."
What's computation? Brooks notes:
Who is familiar with Lakoff and Johnson’s arguments in Metaphors We Live By? They talk about how we think in metaphors, which are based in the physical world in which we operate. That’s how we think and reason. In Turing’s computation, we use metaphors of place, and state, and change of state at place, and that’s the way we think about computation. We think of it as these little places where we put stuff and we move it around. That’s our vision of computation.
Here's one example where, Brooks claims, the computational metaphor doesn't work very well:
Here’s another example: Where did neurons come from? If you go back to very primitive creatures, there was electrical transmission across surfaces of cells, and then some things managed to transmit internally in the axons. If you look at jellyfish, sometimes they have totally separate neural networks of different neurons and completely separate networks for different behaviors.

For instance, one of the things that neurons work out well for jellyfish is how to synchronize their swimming. They have a central clock generator, the signal gets distributed on the neurons, but there are different transmission times from the central clock to the different parts of the creature. So, how do they handle that? Well, different species handle it in different ways. Some use amazingly fast propagation. Others, because the spikes attenuate as they go a certain distance, there is a latency, which is inversely proportional to the signal strength. So, the weaker the signal strength, the quicker you operate, and that’s how the whole thing synchronizes.

Is information processing the right metaphor there? Or are control theory and resonance and synchronization the right metaphor? We need different metaphors at different times, rather than just computation. Physical intuition that we probably have as we think about computation has served physicists well, until you get to the quantum world. When you get to the quantum world, that physical intuition about stuff and place gets in the way.
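The latency trick is easier to see with numbers. Here's a toy model (all constants are mine and purely illustrative) of the compensation Brooks describes ("the weaker the signal strength, the quicker you operate"): the clock pulse weakens as it travels, and segments respond faster to weaker input, so parts at different distances contract nearly in phase.

```python
# Toy sketch of latency-based synchronization in a jellyfish-like system.
# A central clock sends a pulse; farther segments receive it later and
# weaker, but a weaker signal triggers a shorter local delay, which
# partially cancels the difference in arrival times.

speed = 1.0                    # conduction velocity (arbitrary units)
k = 4.0                        # latency scale, hand-tuned for this demo

def contraction_time(distance):
    travel = distance / speed          # farther parts get the pulse later
    strength = 1.0 / distance          # assume amplitude falls off as 1/d
    latency = k * strength             # weaker signal -> quicker response
    return travel + latency

for d in (1.0, 2.0, 4.0):
    print(f"distance {d}: contracts at t = {contraction_time(d):.2f}")
# Without the compensating latency, firing times would span t = 1 to 4;
# with it they cluster between t = 4 and 5.
```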
A bit later Brooks notes: "A lot of what we do in computation and in physics and in neuroscience is getting stuck in these metaphors."
Further along, Brooks notes:
In my note to John [Brockman] I pointed out a recent paper titled "Could a Neuroscientist Understand a Microprocessor?" I talked about this many years ago. I speculated that if you took the ways neuroscientists work on brains, with probes and correlations between signals, and applied them to a microprocessor without a model of the microprocessor and how it works, it would be very hard to figure out how it works.

There’s a great paper in PLOS last year where they took a 6502 microprocessor that was running Donkey Kong and a few other games and did lesion studies on it; they put probes in. They found the Donkey Kong transistors: if you lesioned out 98 of the 4,000 transistors, Donkey Kong failed, whereas different games didn’t fail with those same transistors. So, that was localizing Donkey Kong-ness in the 6502.

They ran many experiments, similar to those run in neuroscience. Without an underlying model of what was going on internally, they came up with pretty much garbage stuff that no computer scientist thinks relevant to anything. It’s breaking abstraction. That’s why I’m wondering about where we can find new abstractions, not necessarily as different as quantum mechanics or relativity is from normal physics, but are there different ways of thinking that are not extremely mind-breaking that will enable us to do new things in the way that computation and calculus enable us to do new things?
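The methodology of that paper (Jonas and Kording, "Could a Neuroscientist Understand a Microprocessor?", PLOS Computational Biology, 2017) is easy to sketch. In the schematic below, the `simulator` object and its `disable`, `enable`, and `boots` methods are hypothetical stand-ins; the real study used a full transistor-level simulation of the 6502.

```python
# Schematic lesion study: disable each transistor in turn and record which
# "behaviors" (games) fail without it, in the style of neuroscience lesion
# experiments. The simulator interface here is a hypothetical stand-in.

def lesion_study(simulator, games, num_transistors):
    """Map each transistor to the games that fail when it is lesioned."""
    critical = {}
    for t in range(num_transistors):
        simulator.disable(t)                 # lesion one transistor
        broken = [g for g in games if not simulator.boots(g)]
        simulator.enable(t)                  # restore it for the next trial
        if broken:
            critical[t] = broken
    return critical
```

Transistors whose lesion breaks only Donkey Kong come out looking like "Donkey Kong transistors"; Brooks's point is that this kind of localization tells you almost nothing about how the processor actually works.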
During the ensuing discussion Neil Gershenfeld urges Brooks to think about where we should go. He's reluctant but offers:
This is a mixture of continuous stuff. It’s a wide world of lots of stuff happening simultaneously with local dynamics. When you look at a particular process, and this happens in genetic algorithms as well as in the artificial life field—you talk about a bunch of these in "Cellular Automata"—you see a ratcheting process in which things ratchet up to order from disorder. It's something that looks like mush, but out of it, because of some local rules, comes order. It’s limited order, but then when you put different pieces together, which locally result in little pieces of order, you sometimes get much more order from the coupling of them. What calculus of that could you develop? I’m thinking there may be something around that, a language for explaining how local, tiny pieces of order cross-coupling across different places couple together to get more order.
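What "order out of local rules" looks like is easy to demonstrate with a one-dimensional cellular automaton. Here's a minimal example in Python (Rule 110 is my choice of rule; Brooks doesn't name one): each cell updates from just itself and its two neighbors, yet persistent large-scale structure emerges from a random start.

```python
# Elementary cellular automaton: each cell's next state depends only on the
# 3-cell neighborhood (left, self, right). Rule 110 turns a random row into
# patterns with persistent, interacting structures.

import random

RULE = 110
width, steps = 64, 24
cells = [random.randint(0, 1) for _ in range(width)]

for _ in range(steps):
    print("".join("#" if c else "." for c in cells))
    cells = [
        (RULE >> ((cells[(i - 1) % width] << 2)    # left neighbor
                  | (cells[i] << 1)                # the cell itself
                  | cells[(i + 1) % width])) & 1   # right neighbor
        for i in range(width)
    ]
```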
A bit later, Alison Gopnik:
I want to push against the idea that we’re stuck. In some sense, the very idea of computation itself is an example of a bunch of human beings with human brains overriding earlier sets of intuitions in ways that turned out to be very productive. The intuition that centuries of philosophers and psychologists had was that if you wanted something that was rational or intelligent, it was going to have to have subjective conscious phenomenology the way that people did. That was the whole theory of ideas, historically.

Then the great discovery was, wait a minute, this thing that is very subjective and phenomenological that the women computers are doing at Bletchley Park, we could turn that into a physical system. That’s terribly unintuitive, right? That completely goes against all the intuitive dualism that we have a lot of evidence for. But the remarkable thing is that people didn’t just seize up at that point. They didn’t even seize up in the way that you might with quantum mechanics, where they say, okay, this is out there in the world, but we just don’t have any way of dealing with it. People developed new conceptual intuitions and understandings that dealt with it.

The question is whether there is something like that out there now that could potentially give us a better metaphor. It’s important to say part of the reason why the computational metaphor was successful was because it was successful. It was incredibly predictive, and for anyone who is trying to do psychology, if you’re trying to characterize what’s going on in the head of this four-year-old, it turns out that thinking about it in computational terms is the most effective way of making good predictions. It’s not a priori the case that you’d have to think about it computationally—you could think about it as a dynamic system, or you could think about it as an analog system—it’s just that if you wanted to predict at a relatively high level what a four-year-old did by thinking of them as an analog system, you’d just fail in a way that you wouldn't fail thinking about it computationally.
A bit later Caroline Jones urges him to elaborate on adaptation, which he'd mentioned in his remarks:
Caroline, on this adaptation, I don't have a good way of talking about it yet, so I can’t say how it applies. It’s an important difference. The way we engineer our computational systems is with no adaptation, and the way all biological systems work is through adaptation at every level all the time.
Adaptation is what's going on as the brain matures, no? Neuroplasticity is an adaptive process, and it's very important for the human mind. Whatever it may be capable of in principle, no human brain is ever in fact capable of "universal computation." We specialize our brains for a certain range of tasks, with each of us adopting a different range of specializations.
Still further along, physicist Frank Wilczek notes:
One thing you mentioned, implicitly at least in the discussion of the worms, that seems quite fundamental is the question of openness versus closedness—the systems that have to take information from the world instead of being programmed by somebody. That’s a very fundamental distinction. That is also close to the issue of analog versus digital. The real world has a much more analog aspect and is also much less tractable. So, taking information from the real world and putting it into a machine through learning may lead to structures that are much more complex and intractable than things that are programmed.
Here's a nice little exchange at the very end:
WILCZEK: At the end of his [von Neumann's] life he was also working on self-reproducing machines.

BROOKS: Right—the 29-state automata for self-reproduction.

WILCZEK: You can call it computing, but it’s not really computing.

WOLFRAM: They thought at that time that this idea of universal computation was one thing, but then the idea of universal construction would be another thing.

WILCZEK: Yes, that’s right.
WOLFRAM: That hasn’t panned out too well.
WILCZEK: Well, maybe it should.
Is this why cognitive science has never managed to coalesce into a coherent discipline around the idea of computation?