Monday, August 5, 2013

Turtles All the Way Down: How Dennett Thinks

An Essay in Cognitive Rhetoric

I want to step back from the main thread of discussion and look at something else: the discussion itself. Or, at any rate, at Dennett’s side of the argument. I’m interested in how he thinks and, by extension, in how conventional meme theorists think.

And so we must ask: Just how does thinking work, anyhow? What is the language of thought? Complicated matters indeed. For better or worse, I’m going to have to make it quick and dirty.

Embodied Cognition

In one approach, the mind’s basic idiom is some form of logical calculus, so-called mentalese. While some aspects of thought may be like that, I do not think it is basic. I favor a view called embodied cognition:
Cognition is embodied when it is deeply dependent upon features of the physical body of an agent, that is, when aspects of the agent's body beyond the brain play a significant causal or physically constitutive role in cognitive processing.

In general, dominant views in the philosophy of mind and cognitive science have considered the body as peripheral to understanding the nature of mind and cognition. Proponents of embodied cognitive science view this as a serious mistake. Sometimes the nature of the dependence of cognition on the body is quite unexpected, and suggests new ways of conceptualizing and exploring the mechanics of cognitive processing.
One aspect of cognition is that we think in image schemas, simple prelinguistic structures of experience. One such image schema is that of a container: Things can be in a container, or outside a container; something can move from one container to another; it is even possible for one container to contain another.

Memes in Containers

The container schema seems fundamental to Dennett’s thought about cultural evolution. He sees memes as little things that are contained in a larger thing, the brain; and these little things, these memes, move from one brain to another.

This much is evident on the most superficial reading of what he says, e.g. “such classic memes as songs, poems and recipes depended on their winning the competition for residence in human brains” (from “The New Replicators”). While the notion of residence may be somewhat metaphorical, the locating of memes IN brains is not; it is literal.

What I’m suggesting is that this containment is more than just a contingent fact about memes. That would suggest that Dennett has, on the one hand, arrived at some concept of memes and, on the other hand, observed that those memes just happen to exist in brains. Yes, somewhere Over There we have this notion of memes as the genetic element of culture; that’s what memes do. But Dennett didn’t first examine cultural processes to see how they work. As I will argue below, like Dawkins he adopted the notion by analogy with biology and, along with it, the physical relationship between genes and organisms. The container schema is thus foundational to the meme concept and dictates Dennett’s treatment of examples.

The rather different conception of memes that I have been arguing for in these notes is simply unthinkable in those terms. If memes are (culturally active) properties of objects and processes in the external world, then they simply cannot be contained in brains. A thought process based on the container schema cannot deal with memes as I have been conceiving them.

Homunculi in Homunculi

But that’s hardly Dennett’s only use of the container schema. The container schema is central to his thought, and that is what interests me. I want to plot a quick and crude chart of the course of that image in his thinking.

At the beginning of his recent interview, The Normal Well-Tempered Mind, Dennett asserts:
I'm trying to undo a mistake I made some years ago, and rethink the idea that the way to understand the mind is to take it apart into simpler minds and then take those apart into still simpler minds until you get down to minds that can be replaced by a machine. This is called homuncular functionalism, because you take the whole person. You break the whole person down into two or three or four or seven sub persons that are basically agents. They're homunculi, and this looks like a regress, but it's only a finite regress, because you take each of those in turn and you break it down into a group of stupider, more specialized homunculi, and you keep going until you arrive at parts that you can replace with a machine, and that's a great way of thinking about cognitive science.
So, in this homuncular functionalism that he adopted early in his career, the whole person, that is, the MIND of a whole person, contains “two or three or four or seven sub persons [or] agents” and each of those in turn contains other agents, and so on until we reach individual neurons, which Dennett, following McCulloch and Pitts, conceived of as simple logical switches.
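To make the bottom of that cascade concrete, here is a minimal threshold-unit sketch in the McCulloch-Pitts spirit (the Python names, weights, and thresholds are my own illustration, not anything from Dennett or the original 1943 paper): a neuron-as-switch fires if the weighted sum of its binary inputs reaches a threshold, which is enough to implement logical AND and OR.

```python
# A McCulloch-Pitts-style unit: binary inputs, fixed weights, a threshold.
# It fires (outputs 1) iff the weighted sum of its inputs meets the threshold.
def mp_neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Such units behave as logical switches: AND and OR are just
# different thresholds over the same two unit-weighted inputs.
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
```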

Now we have a cascade of containers within containers. That’s the container schema (recursively) applied to itself.
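The recursion is easy to see if you render the schema as a data structure. Here is a hypothetical sketch (mine, not Dennett’s formalism): a homunculus is either a bare switch or a container of stupider sub-homunculi, and the whole mind is just the outermost container. Nothing in the structure itself fixes how many levels the cascade should have, a point I return to below.

```python
# Hypothetical rendering of homuncular functionalism as a recursive
# container: an agent is either a machine-replaceable switch (a leaf)
# or a container of stupider sub-agents.
from dataclasses import dataclass, field

@dataclass
class Homunculus:
    name: str
    parts: list = field(default_factory=list)  # empty => a bare switch

def depth(h: Homunculus) -> int:
    """Levels in the cascade, from this container down to the switches."""
    return 1 if not h.parts else 1 + max(depth(p) for p in h.parts)

# The 'finite regress': a mind of sub-persons, bottoming out in switches.
mind = Homunculus("whole person", [
    Homunculus("perception", [Homunculus("edge detector")]),
    Homunculus("planning", [Homunculus("comparator")]),
])
print(depth(mind))  # 3 -- but nothing says how many levels there should be
```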

That is not all that’s going on here. Rather, the cascade of containers is being used to solve another problem: how do we conceive of a mind as being somehow enacted by inanimate physical things? That is, he’s proposing a solution to the mind-body problem, which is a kissing cousin to the Great Chain of Being (cf. George Lakoff and Mark Turner, More than Cool Reason, pp. 170 ff).

At the bottom of the Great Chain we have inanimate matter. We find plants one level up. Go up another level and we’ve got animals. And then humans are above that. So Dennett is using a cascade of homuncular containers to bridge the ontological gap between humans and inanimate matter.

How can that possibly work?

Flim-flammery.

Though I’ve not read Dennett’s full-dress technical account of homuncular functionalism, just what he says in this interview, I’d be rather surprised if he specified how many levels the cascade has from top to bottom, where the top homunculus encompasses the whole brain and the bottom homunculi are individual neurons. The moment you say it’s 10, or 15, or 7 levels, whatever, the question arises: Why 10, or 15, or 7? Does it matter? How? You don’t want to be asking such questions. They’re too, well, empirical, too close to thinking about actual mechanisms. So you keep it vague.

That’s how Dennett bridges the gap between human minds and (relatively) simple physical computers. He places a cascade of homunculi between the top and the bottom of the Great Chain, thereby allowing him to reduce the top element, humans, to a complex collection of elements at the bottom, inanimate logical switches.

Genes in Containers

Then Dennett turned his attention to biology (e.g. Darwin’s Dangerous Idea, 1995, which, btw, I’ve not read), where he gets to talk about genes and biological evolution. Genes, of course, are literally little things contained within bigger things, cells and then multi-celled organisms. Genes aren’t particularly simple; the molecules that embody them are in fact quite complex. But they are definite physical things. We can observe them in various ways.

That is to say, we don’t need to posit some vague homuncular cascade between genes at the bottom and whole organisms at the top. There is a definite physical structure there and we can study that structure and the processes involving it. While we’re far from understanding how it all works, there’s little doubt that it is a physical process, albeit with a dollop of “information” to help it along.

Now we’re getting somewhere.

Dennett’s next step is simply to adopt Dawkins’s notion of the meme as some mental primitive, one more interesting than those McCulloch-Pitts logical neurons. As the gene is to the whole (multi-celled) organism, so is the meme to the whole mind. If we can dispense with the homuncular cascade in studying the relation between genes and organism, then, by analogy, we can dispense with it in studying the relation between memes and the mind. But we still have the notion of the mind as a container of little things, only now those little things are memes rather than neurons.

Yeah, it’s crude, but it seems to me that that’s the underlying logic of Dennett’s thought.

Now, things get interesting and, at the same time, they fall apart. As a rhetorical convenience Dawkins talks of genes as though they are agents, and other biologists talk that way as well. It works; it facilitates thought. And ... it can be dropped in favor of more complex constructions if necessary, constructions that do not conceive of genes as agents.

Let’s do the same with memes. Now Dennett has his memes as agents, flitting from mind to mind and commandeering mental/neural resources. The problem here, of course, is that Dennett has no way of dropping the agent-language in favor of a more sophisticated view. He doesn’t really have a more sophisticated view.

Except by way of analogy with computer viruses or apps. We’re right back where he began, with logical switches. And those switches are ultimately put through their paces by human programmers. We know how those programmers work, don’t we? A cascade of nested homunculi, one that, furthermore, evolved over millions and millions of years of natural selection.

It sorta works. But only sorta. The duct tape, baling wire, and chewing gum stick out all over the place.

Those Homuncular Neurons and Their Coalitions and Cabals

So, quickly and crudely, we have Dennett’s use of the notion of itty bitty things being contained in some big thing:
  1. homuncular functionalism, itty bitty things in the big big mind
  2. genes, itty bitty things from which big big organisms are constructed
  3. memes, itty bitty things from which all of mind and culture are constructed
Where’s he at now? He’s realized the error of his early ways and has decided that it will not do to think of neurons as simple logical switches. Rather, returning to the (not so) well-tempered mind:
The question is, what happens to your ideas about computational architecture when you think of individual neurons not as dutiful slaves or as simple machines but as agents that have to be kept in line and that have to be properly rewarded and that can form coalitions and cabals and organizations and alliances? This vision of the brain as a sort of social arena of politically warring forces seems like sort of an amusing fantasy at first, but is now becoming something that I take more and more seriously, and it's fed by a lot of different currents.
Now, all of a sudden, those neurons are no longer logical switches. They’re agents. Homuncular agents. It’s as though Dennett has taken the TOP of the Great Chain, a human being, and deposited it near the BOTTOM, in single neurons. Now we have a snake-like cascade that’s swallowing its own tail.

This is metaphorical and Dennett surely knows that. Still, he can’t shake the metaphors:
The idea of selfish neurons has already been articulated by Sebastian Seung of MIT in a brilliant keynote lecture he gave at Society for Neuroscience in San Diego a few years ago. I thought, oh, yeah, selfish neurons, selfish synapses. Cool. Let's push that and see where it leads. But there are many ways of exploring this. One of the still unexplained, so far as I can tell, and amazing features of the brain is its tremendous plasticity.
Now we’ve got synapses as agents! How about molecules, atoms, and subatomic particles?

Though I’ve not heard Sebastian Seung’s talk, I’ve been reading neuroscientists writing about networks of individually complex neurons for 40 years, and they don’t use Dennett’s anthropomorphic imagery of “coalitions and cabals”. They don’t have to; they have technical concepts.

So why does Dennett insist on the anthropomorphic metaphors? Why hasn’t he forged a non-technical idiom that doesn’t require them?

The problem isn’t so much that he uses this anthropomorphic idiom, but that it seems to be his primary vehicle for thinking about mind, brain, and culture. He’s not simply using it to explain technical concepts to a more general audience. He’s using it to explain those concepts to himself.

It seems to me that what Dennett has done over the course of his career is use the gene as a conceptual tertium quid between the mind as a whole and the brain as a collection of neurons. He has taken the agency of the Dawkinsian gene and transferred it to the meme, which puts it into the mind as itty bitty things within the larger mind-as-a-whole. And now, in one final step, he infuses that agency into individual neurons, which he no longer conceptualizes as simple logical elements.

The attempt to think of the mind, then, as some complex phenomenon arising out of a hierarchical arrangement of simple switches has failed. Those once simple switches have now become sophisticated agents engaging in coalitions and cabals.

But who knows, maybe those neurons simply stand on turtles all the way down.
