
Monday, May 27, 2013

Watch Out, Dan Dennett, Your Mind’s Changing Up on You!

I want to look at two recent pieces by Daniel Dennett. One is a formal paper from 2009, The Cultural Evolution of Words and Other Thinking Tools (Cold Spring Harbor Symposia on Quantitative Biology, Volume LXXIV, pp. 1-7, 2009). The other is an informal interview from January of 2013, The Normal Well-Tempered Mind. What interests me is how Dennett thinks about computation in these two pieces.

In the first piece Dennett seems to be using the standard-issue computational model/metaphor that he’s been using for decades, as have others. This is the notion of a so-called von Neumann machine with a single processor and a multilayer top-down software architecture. In the second and more recent piece Dennett begins by asserting that, no, that’s not how the brain works, I was wrong. At the very end I suggest that the idea of the homuncular meme may have served Dennett as a bridge from the older to the more recent conception.

Words, Applets, and the Digital Computer

As everyone knows, Richard Dawkins coined the term “meme” as the cultural analogue to the biological gene, or alternatively, a virus. Dennett has been one of the most enthusiastic academic proponents of this idea. In his 2009 Cold Spring Harbor piece Dennett concentrates his attention on words as memes, perhaps the most important class of memes. Midway through the paper he tells us that “Words are not just like software viruses; they are software viruses, a fact that emerges quite uncontroversially once we adjust our understanding of computation and software.”

Those first two phrases, before the comma, assert a strong identification between words and software viruses. They are the same (kind of) thing. Then Dennett backs off. They are the same, providing of course, that “we adjust our understanding of computation and software.” Just how much adjusting is Dennett going to ask us to do?
This is made easier for our imaginations by the recent development of Java, the software language that can “run on any platform” and hence has moved to something like fixation in the ecology of the Internet. The intelligent composer of Java applets (small programs that are downloaded and run on individual computers attached to the Internet) does not need to know the hardware or operating system (Mac, PC, Linux, . . .) of the host computer because each computer downloads a Java Virtual Machine (JVM), designed to translate automatically between Java and the hardware, whatever it is.
The “platform” on which words “run” is, of course, the human brain, about which Dennett says nothing beyond asserting that it is there (a bit later). If you have some doubts about the resemblance between brains and digital computers, Dennett is not going to say anything that will help you. What he does say, however, is interesting.
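The virtual-machine idea Dennett leans on here can be made concrete with a toy sketch: a tiny stack-based interpreter that runs the same “bytecode” on any host that can run the interpreter, just as the JVM runs the same Java bytecode on a Mac, PC, or Linux box. This is a deliberately minimal illustration of the concept, not the real JVM; the opcode names and program are mine.

```python
# Toy virtual machine: a stack interpreter for a minimal "bytecode".
# The portability lives in the VM, not in the program it runs.

def run(bytecode):
    """Interpret a list of (opcode, arg) instructions on a stack."""
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack.pop()

# The same "applet" (instruction list) runs unchanged wherever the
# interpreter itself has been ported -- the program never needs to
# know anything about the host hardware.
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
print(run(program))  # (2 + 3) * 4 = 20
```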

Notice that he refers to “the intelligent composer of Java applets.” That is, the programmer who writes those applets. Dennett knows, and will assert later on, that words are not “composed” in that way. They just happen in the normal course of language use in a community. In that respect, words are quite different from Java applets. Words ARE NOT explicitly designed; Java applets ARE. Those Java applets seem to have replaced computer viruses in Dennett’s exposition, for he never again refers to them, though they figured emphatically in the topic sentence of this paragraph.
The JVM is “transparent” (users seldom if ever encounter it or even suspect its existence), automatically revised as needed, and (relatively) safe; it will not permit rogue software variants to commandeer your computer.
Computer viruses, depending on their purpose, may also be “transparent” to users, but, unlike Java applets, they may also commandeer your computer. And that’s not nice. Earlier Dennett had said:
Our paradigmatic memes, words, would seem to be mutualists par excellence, because language is so obviously useful, but we can bear in mind the possibility that some words may, for one reason or another, flourish despite their deleterious effects on this utility.
Perhaps that’s one reason Dennett abandoned his talk of computer viruses in favor of those generally helpful Java applets.

But then why would he have talked about computer viruses in the first place? Simple: tradition. Memetics talk has long used the notion of a (cultural) virus either as an alternative to or in alternation with the notion of a (cultural) gene; and memetics has talked of computer viruses for some time now. It’s a useful analogy.

Now that he has banished computer viruses, and their nasty effects, from our minds, Dennett then goes on to assert:
Similarly, when you acquire a language, you install, without realizing it, a Virtual Machine that enables others to send you not just data, but other virtual machines, without their needing to know anything about how your brain works.
Language acquisition and learning have now become a matter of installation. That, it seems to me, marks quite a difference between computer technology and natural language. Software installation is a relatively quick and straightforward process, something a human agent does to a computer. Language learning takes place over a decade or so and is primarily self-directed, though assisted by others. Similarly, it is easy to uninstall a piece of software. But how would you “uninstall” someone’s knowledge of a language? One might well do so by destroying a large part of their brain, but that would likely destroy much else as well. Knowledge and skills reside in brains in a way that is quite different from the way software exists in computers.

Dennett of course knows that language learning is not the same as software installation and talks about it later in the paper. But he first re-establishes the parallel between words and software:
Words are not just sounds or shapes. As Jackendoff (2002) demonstrates, they are autonomous, semi-independent informational structures, with multiple roles in cognition. They are, in other words, software structures, like Java applets.
Dennett then goes on to distinguish between intelligent design (of Java applets) and blind evolution (of language) and he tells us something about how words are installed:
Unlike Java applets, they are designed by blind evolution, not intelligent designers, and they get installed by repetition, either by deliberate rehearsal or via several chance encounters. The first time a child hears a new word, it may scarcely register at all, attracting no attention and provoking no rehearsal; the second time the child hears the word, it may be consciously recognized as somewhat familiar or it may not, and in either case, its perception will begin laying down information about context, about pronunciation, and even about meaning.
The paragraph goes on to say a bit more about language acquisition and to note, following Terrence Deacon, that the structure of the brain must have co-evolved with the emergence of language.

I want to dwell on these two differences:
1) Intelligent and deliberate design of software vs. “design” of language through blind evolution.

2) Human installation of software in a computer as a relatively quick ‘one-shot’ process vs. language acquisition and learning by a child over a period of years.
These are two aspects of the same underlying fact: computer systems, software and hardware, are designed by humans who have a comprehensive, external overview of those systems, while language evolves in humans who have no such overview of it.

External Designers vs. Neurons as Agents

As my friend and colleague Tim Perper put it years ago, the trouble with the analogy between computers and brains is that computers are designed and programmed by humans but brains are not. While I agreed with Tim–that observation, after all, has been around for some time–I also thought to myself that that difference was somehow external to the comparison. And for some purposes perhaps it is; I know I’ve certainly found it useful to think of the mind as being computational in some important way.

But more and more I find myself agreeing with Tim, that we can’t simply ignore that difference. It is fundamental. It is not so much about the general idea of computing, but about a specific style of computing.

Dennett, of course, does not deny a difference between brains and computers. It’s there in the care he takes to differentiate the intelligent design of Java applets from the evolutionary design of words. But he does no more than state the difference. He makes no attempt to probe it, nor does he suggest that it might undercut the force of his analogy between words and applets.

Now, let’s take a look at his recent interview, The Normal Well-Tempered Mind. The interview is informal, unlike his Cold Spring Harbor article. Dennett opens with a moderately dramatic statement:
I'm trying to undo a mistake I made some years ago, and rethink the idea that the way to understand the mind is to take it apart into simpler minds and then take those apart into still simpler minds until you get down to minds that can be replaced by a machine.
So, he made a mistake years ago–near the beginning of his career I would think–and he’s trying to think himself out from under it. Good enough. He goes on to say:
The idea is basically right, but when I first conceived of it, I made a big mistake. I was at that point enamored of the McCulloch-Pitts logical neuron. McCulloch and Pitts had put together the idea of a very simple artificial neuron, a computational neuron, which had multiple inputs and a single branching output and a threshold for firing, and the inputs were either inhibitory or excitatory. They proved that in principle a neural net made of these logical neurons could compute anything you wanted to compute.
McCulloch and Pitts did this work in the 1940s and it was common lore, at least in some circles, by the time Dennett went to college and graduate school (you can find the important papers in Warren S. McCulloch, Embodiments of Mind, 1965). Dennett goes on to say:
So this was very exciting. It meant that basically you could treat the brain as a computer and treat the neuron as a sort of basic switching element in the computer, and that was certainly an inspiring over-simplification. Everybody knew it was an over-simplification, but people didn't realize how much, and more recently it's become clear to me that it's a dramatic over-simplification, because each neuron, far from being a simple logical switch, is a little agent with an agenda, and they are much more autonomous and much more interesting than any switch.
The computers on which one can install a Java Virtual Machine, and for which one can write a variety of Java applets, ARE constituted of millions of logical switches, realized in silicon. These circuits, of course, are designed by engineers, as are the layers of software that run in those circuits.

But the neurons that constitute human brains ARE NOT simple switches. They are, as Dennett now emphasizes, more or less autonomous agents. When language is being acquired, it is being acquired by billions of such agents, connected together in a brain, pursuing their various individual agendas even as the “owner” of that brain is living her life.
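Dennett’s description of the McCulloch-Pitts logical neuron, with its multiple excitatory or inhibitory inputs, firing threshold, and single binary output, can be sketched in a few lines. This is a minimal sketch of the idea; the function name is mine, and the veto-style inhibition follows the original 1943 model, though conventions vary.

```python
# McCulloch-Pitts "logical neuron": binary inputs, each marked
# excitatory ('+') or inhibitory ('-'), and a firing threshold.

def mp_neuron(inputs, kinds, threshold):
    """inputs: 0/1 activations; kinds: '+' or '-' per input; returns 0 or 1."""
    # Absolute inhibition: any active inhibitory input blocks firing.
    if any(x and k == "-" for x, k in zip(inputs, kinds)):
        return 0
    excitation = sum(x for x, k in zip(inputs, kinds) if k == "+")
    return 1 if excitation >= threshold else 0

# With threshold 2, the neuron computes AND over two excitatory inputs...
print(mp_neuron([1, 1], ["+", "+"], 2))  # fires: 1
print(mp_neuron([1, 0], ["+", "+"], 2))  # does not fire: 0
# ...and an active inhibitory input silences it regardless of excitation.
print(mp_neuron([1, 1, 1], ["+", "+", "-"], 2))  # 0
```

Networks of such units can, as Dennett notes, compute any logical function, which is what made the neuron-as-switch picture so tempting; the point of his retraction is that real neurons are nothing like this simple.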

Dennett goes on to ask:
The question is, what happens to your ideas about computational architecture when you think of individual neurons not as dutiful slaves or as simple machines but as agents that have to be kept in line and that have to be properly rewarded and that can form coalitions and cabals and organizations and alliances? This vision of the brain as a sort of social arena of politically warring forces seems like sort of an amusing fantasy at first, but is now becoming something that I take more and more seriously, and it's fed by a lot of different currents.
And then:
First, you try to make minds as simple as possible. You make them as much like digital computers, as much like von Neumann machines, as possible. It doesn't work. Now, we know why it doesn't work pretty well. So you're going to have a parallel architecture because, after all, the brain is obviously massively parallel.

It's going to be a connectionist network. Although we know many of the talents of connectionist networks, how do you knit them together into one big fabric that can do all the things minds do? Who's in charge? What kind of control system? Control is the real key, and you begin to realize that control in brains is very different from control in computers. Control in your commercial computer is very much a carefully designed top-down thing.
The topmost level of control over digital computers is external to the machines themselves. It is the prerogative of the humans who design, build, program, and operate the machines. In contrast, human brains and human language have evolved without such external control. They have somehow arisen in populations of interacting neurons, each of them active agents.

Dennett goes on from there to talk about culture as providing “opportunities that don't exist for any other brain tissues in any other creatures, and that this exploration of this space of cultural possibility is what we need to do to explain how the mind works.” And so:
My next major project will be trying to take another hard look at cultural evolution and look at the different views of it and see if I can achieve a sort of bird's eye view and establish what role, if any, is there for memes or something like memes and what are the other forces that are operating. We are going to have to have a proper scientific perspective on cultural change.
It’s a worthy project. Now that he’s abandoned the idea of neurons as simple switches and has started to think about them as active agents, will he abandon the metaphorical and analogical use of computing technology based on such simple switches?

From Switches Through Memes to Neural Agents?

What I find particularly interesting and curious in Dennett’s thinking as it is exhibited in these two pieces is the relative absence of talk about memes as agents. For he has certainly talked in that way and even endowed those memetic agents with the power to inculcate irrational beliefs in otherwise rational human beings.* What I’m wondering is whether, in effect, such agency has become detached in Dennett’s mind from the concept of memes flitting about from one brain to another and has now become lodged in those complex neurons “that can form coalitions and cabals and organizations and alliances.” I note, in parting, that the same McCulloch who gave us the neuron-as-logical-switch also gave us a concept of competitive control in which the ultimate control of the brain is lodged, not at the top, but at the bottom, in the reticular activating system.

* * * * *

If you want to play around with the idea of the brain as consisting of millions of quasi-autonomous agents, see my post, The Busy Bee Brain. For a brief and informal introduction to McCulloch’s model of the reticular activating system, see Mode & Behavior 2: McCulloch’s Model.

*You'll find Dennett in full meme-as-homuncular-agents mode in this video from 2007 for a sophisticated popular audience at a TED conference:

