It's an old and thorny question that Norbert Hornstein is addressing over at Faculty of Language. He's commenting on a paper by Randy Gallistel (which you can find HERE). The argument deserves full-dress commentary, which is more than I have time for. So I'm going to make only a quick remark or two.
Here's a crucial passage:
They [connectionists] have failed to show how such systems [connectionist networks of neurons] could symbolically compute anything at all. As Randy puts it (p. 2), there is no model of how such a brain could even add two numbers together:
"There is, however, a problem with this hypothesis: synaptic conductances are ill suited to function as symbols (Gallistel and King 2010). Anyone who doubts this should ask the first neuroscientist they can corner to explain to them how the brain could write a number into a synapse, or into a set of synapses. Then, step back and watch the hands wave. In the unlikely event of an intelligible answer, ask next how the brain operates on the symbols written into synapses. How, for example, does it add the number encoded in one synapse (or set of synapses) to the number encoded in a different synapse (or set…) to generate yet another synapse (or set…) that encodes the sum?"
The reason is that such “brains” cannot actually manipulate symbols, unlike, say, a classic machine with a Turing architecture (i.e. one with read/write memory and indirect addressing to name two important features).
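To make Hornstein's parenthetical concrete, here is a toy sketch (my own illustration, not Hornstein's or Gallistel's) of the two Turing-architecture features he names: read/write memory and indirect addressing. The cell layout and function name are arbitrary choices for the example.

```python
# Toy memory: a flat array of numbered, writable cells.
memory = [0] * 16

# Writing a number into memory is trivial in this architecture.
memory[3] = 7    # first addend
memory[4] = 5    # second addend

# Indirect addressing: cells 0-2 hold *addresses* of other cells, not values.
memory[0] = 3    # pointer to first operand
memory[1] = 4    # pointer to second operand
memory[2] = 5    # pointer to where the sum should be written

def add_indirect(mem, p1, p2, p_out):
    """Read two operands via pointers, write their sum via a third pointer."""
    a = mem[mem[p1]]          # follow the pointer at p1, read the operand
    b = mem[mem[p2]]          # follow the pointer at p2, read the operand
    mem[mem[p_out]] = a + b   # follow the pointer at p_out, write the result
    return mem

add_indirect(memory, 0, 1, 2)
print(memory[5])  # → 12
```

The point of the sketch is how little machinery the classical architecture needs: once you have addressable, writable cells, "adding the number in one location to the number in another" is a three-line operation. Gallistel's challenge is that nobody can say what the synaptic analogue of those three lines would be.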
Nor would I expect a brain to "write a number into a synapse, or into a set of synapses." The remark presupposes symbolic computation as some kind of basic primitive function of the brain, out of which other functions are constructed and derived.
I take a somewhat different view. As far as I'm concerned, language is the most primitive symbolic operation the brain carries out. That implies that there is no symbolic processing in the brains of neonates, nor is there any symbolic processing in the brains of animals. Whatever is going on, it's not symbolic.
As for adding two numbers together, that's not at all a primitive property of human minds. It's highly derivative. There are, after all, societies that lack robust number systems. If the only numbers you've got are one, two, many, or something like it, you're not going to be doing much arithmetic. So Gallistel's example doesn't get much purchase in my mind.
If adding numbers is what you want to account for, I'm not going to be looking at primitive neural processes. I'm going to be looking at how brains learn to count and thereby associate numerals with objects. And I'm going to be looking at how brains learn primitive symbolic assertions, such as 1+1=2, 1+2=3, 1+3=4, and so forth. And then I'll be looking at how one learns to string together a whole bunch of such simple assertions.
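The picture in that paragraph, arithmetic as memorized assertions strung together, can be sketched in code. This is my own toy illustration of the idea, not anything from Gallistel or Hornstein: a lookup table standing in for the memorized single-digit facts, chained together with a carry the way children learn column addition.

```python
# The "memorized assertions": every single-digit sum, stored as a fact table.
# (Built here with +, of course; the point is that the adder below only
# *recalls* facts, it never computes a digit sum from scratch.)
FACTS = {(a, b): a + b for a in range(10) for b in range(10)}

def column_add(x, y):
    """Add two non-negative integers digit by digit using only FACTS lookups."""
    xs = [int(d) for d in str(x)][::-1]   # digits, least significant first
    ys = [int(d) for d in str(y)][::-1]
    out, carry = [], 0
    for i in range(max(len(xs), len(ys))):
        a = xs[i] if i < len(xs) else 0
        b = ys[i] if i < len(ys) else 0
        s = FACTS[(a, b)] + carry          # recall a fact, adjust for the carry
        out.append(s % 10)
        carry = s // 10
    if carry:
        out.append(carry)
    return int("".join(str(d) for d in out[::-1]))

print(column_add(47, 85))  # → 132
```

Notice that the procedure itself is dumb bookkeeping; all the arithmetic content lives in the table of learned facts, which is roughly the division of labor the paragraph above describes.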
That's a lot of learning, and it typically takes children years of practice to get fluent at it. Now, it may be the case that, abstractly considered, arithmetic is a more basic kind of symbol manipulation than speaking a language. But that's only abstractly. Concretely, in terms of how people function in the real world, things are the reverse. Speaking a language comes naturally, though fluency takes years. Arithmetic calculation is not at all natural. Fluency requires years of tedious, focused practice of mind-numbingly simple things.
Well, in the hierarchy of linguistic morphosyntactic evolution, icons precede symbols, so I'd have to wonder if the same goes for neural computation. Might be a lot easier for neural network devices as well.
— Jess Tauber