Friday, May 8, 2015

Dennett’s WRONG: the Mind is NOT Software for the Brain

And he more or less knows it; but he wants to have his cake and eat it too. It’s a little late in the game to be learning new tricks.

I don’t know just when people started casually talking about the brain as a computer and the mind as software, but it’s been going on for a long time. But it’s one thing to use such language in casual conversation. It’s something else to take it as a serious way of investigating mind and brain. Back in the 1950s and 1960s, when computers and digital computing were still new and the territory – both computers and the brain – relatively unexplored, one could reasonably proceed on the assumption that brains are digital computers. But an opposed assumption – that brains cannot possibly be computers – was also plausible.

The second assumption strikes me as beside the point for those of us who find computational ideas essential to thinking about the mind, for we can proceed without the first and stronger assumption, that the mind/brain is just a digital computer. It seems to me that the sell-by date on that one is now past.

The major problem is that living neural tissue is quite different from silicon and metal. Silicon and metal passively take on the impress of purposes and processes humans program into them. Neural tissue is a bit trickier. As for Dennett, no one championed the computational mind more vigorously than he did, but now he’s trying to rethink his views, and that’s interesting to watch.

The Living Brain

In 2014 Tecumseh Fitch published an article in which he laid out a computational framework for “cognitive biology” [1]. In that article he pointed out why the software/hardware distinction doesn’t really work for brains (p. 314):
Neurons are living cells – complex self-modifying arrangements of living matter – while silicon transistors are etched and fixed. This means that applying the “software/hardware” distinction to the nervous system is misleading. The fact that neurons change their form, and that such change is at the heart of learning and plasticity, makes the term “neural hardware” particularly inappropriate. The mind is not a program running on the hardware of the brain. The mind is constituted by the ever-changing living tissue of the brain, made up of a class of complex cells, each one different in ways that matter, and that are specialized to process information.
Yes, though I’m just a little antsy about that last phrase – “specialized to process information” – as it suggests that these cells “process” information in the way that clerks process paperwork: moving it around, stamping it, denying it, approving it, amending it, and so forth. But we’ll leave that alone.

One consequence of the fact that the nervous system is made of living tissue is that it is very difficult to undo what learning has written into the detailed micro-structure of this tissue. It’s easy to wipe a hunk of code or data from a digital computer without damaging the hardware, but it’s almost impossible to do anything like that with a mind/brain. How do you remove a person’s knowledge of Chinese history, or their ability to speak Basque, and nothing else, and do so without physical harm? It’s impossible.
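To make the contrast concrete, here’s a minimal sketch in Python – the data and names are purely illustrative, a toy and nothing more – of just how cleanly separable “knowledge” is from hardware in conventional software. One statement removes one body of knowledge, leaves everything else intact, and touches no physical component whatsoever:

    # A toy "knowledge store" - purely illustrative data, not a model of memory.
    skills = {
        "chinese_history": ["Qin unification", "Tang poetry", "Opium Wars"],
        "basque": ["kaixo = hello", "eskerrik asko = thank you"],
        "arithmetic": ["long division", "fractions"],
    }

    # Removing the "ability to speak Basque" is one clean, reversible
    # operation that affects nothing else and leaves the hardware untouched.
    del skills["basque"]

    print(sorted(skills))  # ['arithmetic', 'chinese_history']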

Further, as mind/brains learn, that knowledge results in specialization, and specialization imposes limitations. It is easy to learn a first language, or even two or three, if one grows up speaking them. But learning new languages later in life is much more difficult. Belief systems and knowledge systems are similar. Throwing over one’s Christian upbringing in favor of Buddhism, or agnosticism, is not easy. It’s not at all like wiping some sectors of a hard drive and writing new information into them.

Whatever it is that happens to the neurons of one’s brain as one grows, learns, and lives, it is intimately and irreversibly embodied in the detailed microstructure of synaptic connections between billions of neurons. Each of these neurons is a living and active agent. None of them simply bears the passive impress of decisions made by agents external to them in the way that digital computers are the creatures of the humans who use them.

Now think about this in the context of long-term evolution. Long ago the only multi-celled animals were simple creatures of the sea with no more than a handful of neurons to speak of. In aggregate these neurons were sensitive to external conditions – e.g. chemical and light gradients – and capable of guiding the animal along those gradients in life-sustaining ways. That’s how they earned their keep. And their keep was expensive, as neural tissue consumes a lot of energy per unit of weight.

Over time more complex creatures evolved, with more complex nervous systems in which many of the neurons were not in touch with the external world. Rather, they just formed connections between sensory neurons and motor neurons. As animals and their nervous systems became more complex, the number of these so-called interneurons proliferated. But all of those interneurons had to pay their way through the services they provided to perception and action.
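To see how little machinery such a creature needs, consider a minimal sketch – my own toy example, not drawn from Fitch or from anyone’s biology – of gradient-following in one dimension: two “sensory” samples of a chemical field, one comparison standing in for the interneurons, and a “motor” step toward the higher concentration:

    import random

    def concentration(x):
        """A one-dimensional chemical field, highest at the source (x = 0)."""
        return -abs(x)

    def step(x):
        # Sensory stage: sample the field on either side of the body.
        left, right = concentration(x - 0.1), concentration(x + 0.1)
        # Interneuron/motor stage: move toward the higher concentration.
        return x - 0.1 if left > right else x + 0.1

    x = random.uniform(-5.0, 5.0)
    for _ in range(100):
        x = step(x)
    print(f"final position: {x:.2f}")  # ends up hovering near the source

Everything this little nervous system does pays its way in perception and action; there is nothing else for it to do.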

The evolved architecture of the human brain, with its billions of neurons arranged into thousands of neurofunctional areas, bears the impress of hundreds of millions of years of evolution. Each neurofunctional area has some role or roles in the overall neural ecology. There is an architecture there, and it differs from that of the modern digital computer.

Dennett Resists

Dennett has a short reply to Fitch (one of several such replies) in which he asserts that we should keep the hardware/software distinction [2]:
I have already endorsed the importance of recognizing neurons as “complex self-modifying” agents, but the (ultra-)plasticity of such units can and should be seen as the human brain’s way of having something like the competence of a silicon computer to take on an unlimited variety of temporary cognitive roles, “implementing” the long-division virtual machine, the French-speaking virtual machine, the flying-a-plane virtual machine, the sightreading-Mozart virtual machine and many more. These talents get “installed” by various learning processes that have to deal with the neurons’ semi-autonomous native talents, but once installed, they can structure the dispositions of the whole brain so strongly that they create higher levels of explanation that are both predictive and explanatory.
The first thing that struck me about this passage is the way Dennett talks about the brain as “having something like the competence of a silicon computer to...” It’s almost as if he’s thinking – and here I’m reading a lot into his words – gosh darn it! the computer does it right and the brain only has this kludgy mechanism that we don’t understand. I suppose that’s natural enough if you believe, as Dennett apparently has for most of his career, that the brain really is (some kind of) a digital computer that’s just implemented in living tissue rather than silicon and metal. But the emphasis seems off to me.

Notice how he’s enclosed “implementing” and “installed” in scare quotes. He wasn’t doing that in his 2009 Cold Spring paper when he talked of memes as software being installed in the brain [3]. So he doesn’t really mean implementing and installing; he knows people learn the various things he lists in that paragraph.

But for some reason he prefers a locution that makes people’s minds the passive recipients of actions by some external agency. If Dennett were a believing Christian, there’d be an obvious explanation for this way of talking: God’s the agent who’s implementing and installing. But Dennett is not a Christian, nor a religious believer of any kind; he’s an active campaigner for atheism.

The fact is, once Dennett bought into the idea that human brains are digital computers, he also bought into a whole network of words and concepts about external agents – humans – doing things to those computers, which are basically the passive recipients of that external agency. To be sure, these complex and subtle devices can run semi-autonomously for a relatively long time (hours, even days) once they’ve been programmed and a task has been activated, but that doesn’t make them ultimately autonomous.

Finally, there’s Dennett’s list of virtual machines: long division, speaking French, flying a plane, and sight-reading Mozart. Why call them virtual machines? Why not call them programs? After all, a virtual machine is a kind of program, as is a Fortran compiler, a word-processor, a Windows emulator (for the Macintosh OS) and so forth. And that list – I assume Dennett was simply after a variety of things, sampling the space, so to speak. But if I’m going to talk of virtual machines, where a virtual machine is a kind of program (with unspecified properties), I’d think long division is a program or routine in the arithmetic virtual machine and the Mozart sight-reader is simply an application of the general sight-reader virtual machine.
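The point is easy to make concrete. Here’s a minimal sketch – a toy interpreter of my own devising, with a made-up two-instruction repertoire – in which the “virtual machine” is plainly just a program, and long division is a routine running on it rather than a machine in its own right:

    def run(program, stack=None):
        """A toy stack-based arithmetic virtual machine - itself just a program."""
        stack = stack if stack is not None else []
        for op, *args in program:
            if op == "push":
                stack.append(args[0])
            elif op == "divmod":  # long division packaged as one VM routine
                divisor, dividend = stack.pop(), stack.pop()
                stack.extend(divmod(dividend, divisor))
        return stack

    # "Long division" is a program run on the VM, not a separate machine.
    print(run([("push", 1234), ("push", 7), ("divmod",)]))  # [176, 2]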

While “virtual machine” seems like a technical term, Dennett’s not actually differentiating it from other technical terms for talking about software. He doesn’t have a real technical use for it. It just gives his language the appearance of a technical precision that’s not there in the underlying thinking. At some level Dennett realizes that the hardware/software distinction doesn’t really work for the human mind/brain. But he wants to be able to use the distinction as though it had technical purchase on the world.

Thus while I agree with Dennett that we cannot understand what the brain does by remaining at the neural level, I don’t think having recourse to the notion of software is at all useful. It just papers over the problem when what we need to do is to develop technical concepts compatible with the facts of neural systems.

Dennett’s other response to the realization that neurons are complex living creatures is to talk about “rogue neurons”, as he has on more than one recent occasion [4, 5]. He’s tap-dancing and hand-waving to beat the band, and he knows it and has pretty much said as much. Giving full credence to the fact that neurons are complex living creatures undermines Dennett’s beliefs about computers and minds in a fundamental way. But Dennett seems more concerned with containing that realization (within the concept of the rogue neuron) than with revising his interrelated repertoire of views to accommodate it.

Thus he tells us, for example: “Nobody knows how to build a basic architecture, a CPU, for a computer where the fundamental elements are a 100 billion individualistic neurons” [5, at roughly 18:50]. But if every element of the computer is an active element, to use the term von Neumann used in his last book, The Computer and the Brain [6], then there’s no need for a central processing unit (CPU). The whole thing is a processor, just as (pretty near) the whole thing is memory (as von Neumann also realized).
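A tiny Hopfield-style network makes the point vividly; it is my illustration, not von Neumann’s or Dennett’s. Every unit both stores information, in its connection weights, and processes it, by updating its own state. There is no central processing unit anywhere, yet the network as a whole recalls a stored pattern:

    import numpy as np

    # One stored pattern; weights set by the outer-product (Hebbian) rule.
    pattern = np.array([1, -1, 1, -1, 1, -1])
    W = np.outer(pattern, pattern).astype(float)
    np.fill_diagonal(W, 0)  # no self-connections

    # Start from a corrupted version of the pattern (two bits flipped).
    state = np.array([1, 1, 1, -1, -1, -1])

    # Each unit updates from its own inputs; no central controller exists.
    for _ in range(5):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1

    print(np.array_equal(state, pattern))  # True: the network is the memory

In such a network memory and processing are the same fabric, which is the direction von Neumann was pointing.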

As for architecture, why not look at the brain itself? To be sure, much about its operation is obscure, to say the least, but one might look to it for architectural cues. That’s what David Hays and I did some years ago in “The Principles and Development of Natural Intelligence” [7], where we argued for five broad computational principles, and placed those principles into correspondence with vertebrate phylogeny and human ontogeny. But for Dennett to entertain a comparable set of ideas, well, that’s going to require more than desperate expostulations about rogue neurons.

References

[1] W. Tecumseh Fitch. Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition. Physics of Life Reviews 11 (2014) 329–364. http://dx.doi.org/10.1016/j.plrev.2014.04.005

[2] Daniel C. Dennett. The software/wetware distinction. Comment on “Toward a computational framework for cognitive biology: unifying approaches from cognitive neuroscience and comparative cognition” by W. Tecumseh Fitch. Physics of Life Reviews 11 (2014) 367–368.

[3] Daniel C. Dennett. The Cultural Evolution of Words and Other Thinking Tools. Cold Spring Harbor Symposia on Quantitative Biology, Volume LXXIV (2009) 1–7.

[4] Daniel C. Dennett. The Normal Well-Tempered Mind. Talk at Edge, 2013. https://edge.org/conversation/the-normal-well-tempered-mind

[5] Daniel C. Dennett. If brains are computers, what kind of computers are they? Keynote at PT-AI 2013 – Philosophy and Theory of Artificial Intelligence, October 2013. https://www.youtube.com/watch?v=OlRHd-r2LOw

[6] John von Neumann. The Computer and the Brain. New Haven, CT: Yale University Press, 1958.

[7] William Benzon and David G. Hays. Principles and Development of Natural Intelligence. Journal of Social and Biological Structures 11 (1988) 293–322.
