Friday, June 28, 2019

Has Dennett Undercut His Own Position on Words as Memes?

I'm bumping this post, originally from April 2015, to the top of the queue because it has elements, which I've highlighted, that are reminiscent of Rodney Brooks's recent talk in which he argues that the computer metaphor is not adequate for understanding what the mind/brain does. Brooks invokes the idea of adaptation and examples from biology. Dennett brings up thermodynamics. Does biology have an advantage there? Powerful computers use huge amounts of energy. Can electronic devices match the energy efficiency of neural tissue? See this remark by Mark P. Mills:

But it’s important to keep in mind that Moore’s Law is, as we’ve noted, fundamentally about finding ways to create ever tinier on-off states. In that regard, in the words of one of the great physicists of the 20th century, Richard Feynman, “there’s plenty of room at the bottom” when it comes to logic engines. To appreciate how far away we still are from a “bottom,” consider the Holy Grail of computing, the human brain, which is at least 100 million times more energy efficient than the best silicon logic engine available.
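To get a feel for that ratio, a back-of-envelope calculation helps. The brain's power budget is commonly estimated at about 20 watts; that figure is my assumption here, not something from Mills's text. At a 100-million-fold efficiency gap, silicon doing equivalent work would draw on the order of gigawatts:

```python
# Back-of-envelope arithmetic. The ~20 W estimate for the brain's power
# budget is an assumption (commonly cited, but not from Mills's text).
brain_watts = 20
efficiency_gap = 100_000_000            # Mills: "at least 100 million times"
silicon_watts = brain_watts * efficiency_gap
print(f"{silicon_watts / 1e9:.0f} GW")  # -> 2 GW for brain-equivalent work
```

Two gigawatts is roughly the output of a large power plant, which gives some sense of why the energy question matters for the brain-as-computer metaphor.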
* * * * *


Early in 2013 Dan Dennett had an interview posted at John Brockman’s Edge site, The Normal Well-Tempered Mind. He opened by announcing that he’d made a mistake early in his career: he had opted for a conception of the brain-as-computer that was too simple. He’s now trying to revamp his sense of what the computational brain is like. He said a bit about that in the interview, and a bit more in a presentation he gave later in the year: If brains are computers, what kind of computers are they? He made some remarks in that presentation that undermine his position on words as memes, though he doesn’t seem to realize it.

Here’s the abstract of that talk:
Our default concepts of what computers are (and hence what a brain would be if it was a computer) include many clearly inapplicable properties (e.g., powered by electricity, silicon-based, coded in binary), but other properties are no less optional, but not often recognized: Our familiar computers are composed of millions of basic elements that are almost perfectly alike – flipflops, registers, or-gates – and hyper-reliable. Control is accomplished by top-down signals that dictate what happens next. All subassemblies can be designed with the presupposition that they will get the energy they need when they need it (to each according to its need, from each according to its ability). None of these is plausibly mirrored in cerebral computers, which are composed of billions of elements (neurons, astrocytes, ...) that are no-two-alike, engaged in semi-autonomous, potentially anarchic or even subversive projects, and hence controllable only by something akin to bargaining and political coalition-forming. A computer composed of such enterprising elements must have an architecture quite unlike the architectures that have so far been devised for AI, which are too orderly, too bureaucratic, too efficient.
Nothing in that abstract seems to undercut his position on memes, and he affirmed that position toward the end of the talk. But we need to look at some of the details.

The Material Mind is a Living Thing

The details concern Terrence Deacon’s recent book, Incomplete Nature: How Mind Emerged from Matter (2013). Rather than quote from Dennett’s remarks in the talk, I’ll quote from his review, "Aching Voids and Making Voids" (The Quarterly Review of Biology, Vol. 88, No. 4, December 2013, pp. 321-324). The following passage may be a bit cryptic, but short of reading the relevant chapters in Deacon’s book (which I’ve not done) and providing summaries, there’s not much I can do, though Dennett says a bit more both in his review and in the video.

Here’s the passage (p. 323):
But if we are going to have a proper account of information that matters, which has a role to play in getting work done at every level, we cannot just discard the sender and receiver, two homunculi whose agreement on the code defines what is to count as information for some purpose. Something has to play the roles of these missing signal-choosers and signal-interpreters. Many—myself included—have insisted that computers themselves can serve as adequate stand-ins. Just as a vending machine can fill in for a sales clerk in many simplified environments, so a computer can fill in for a general purpose message-interpreter. But one of the shortcomings of this computational perspective, according to Deacon, is that by divorcing information processing from thermodynamics, we restrict our theories to basically parasitical systems, artifacts that depend on a user for their energy, for their structure maintenance, for their interpretation, and for their raison d’ĂȘtre.
In the case of words, the signal choosers and interpreters are human beings, and the problem is precisely that they have to agree on “what is to count as information for some purpose.” By talking of words as memes, and of memes as agents, Dennett sweeps that problem under the conceptual rug.

In his paper, “The Cultural Evolution of Words and Other Thinking Tools” (PDF), Dennett likens a natural language, such as English, to Java, and words to Java apps. As Dennett well knows, Java is one of those “parasitical systems, artifacts that depend on a user for their energy, for their structure maintenance, for their interpretation.” So how can it possibly be the basis of a useful analogical understanding of natural language? Well, back when Dennett wrote that paper (it was published in 2009), Deacon’s book didn’t exist. Now that it does, and Dennett has agreed that it guts the idea that the brain is a digital computer with a top-down architecture over simple logic gates, it seems to me he has to accept the consequences.

His account of memes treats them as entities optimized for communication between digital computers; you know, those things that are parasitically dependent upon external (transcendent, if you will) users. Banish those computers and you’ve got to banish those memes. But that’s not what Dennett does. On the contrary, he doubles down on memes.

The Return of the meme, or Neurons gone wild

Well into that same “If brains are computers” talk, after he’s introduced a bunch of individually interesting things, Dennett offers a jaw-dropping hypothesis. By this time he’s sketched a story of neurons as descendants of unicellular organisms that once roamed free (like all cells in multi-celled organisms); they have, if you will, become domesticated to life with other cells. Dennett introduces the idea of a rogue neuron, which he admits is highly speculative (c. 46:36):
I’m suggesting that maybe some of the neurons in your brain are encouraged to go feral, to regain some of the individuality and resourcefulness of their unicellular ancestors from half a billion years ago. Why? They're released from domesticity where they've been working in the service of animal cognition uncomplainingly so that their talents can be exploited by invading memes competing for influence. That there's a co-evolution between genetic and cultural evolution and once a brain becomes a target for meme invasion, the relaxation or the re-expression of otherwise unrealized talents of neurons creates a better architectural environment for the competition of memes to go forward. Armies of the descendants of prisoners now enslaved by the invaders.
What are we to make of this? Not only are active memes back in this picture but they’re invading brains and enslaving rogue neurons. And just before this he’s told us (c. 45:00) that memes are “software with attitude. Data structures with attitude.” How, pray tell, can data structures possibly corral neurons gone wild?

And if we’re going to do language this way, then the problem we’ve got to solve is getting bunches of neural agents in different brains to agree on what words are as physical and informatic objects. How do we call these scattered bunches of neurons to order and get them to negotiate a whole raft of conceptual treaties?

What’s Dennett up to?

1) On the one hand, what is he trying to tell us about the brain and culture? 2) On the other hand, why’s he going about it in an at best semi-coherent metaphorical way that makes little or no sense?

That second question is enormously interesting, but is more akin to literary criticism and rhetorical analysis than to natural philosophy, which is what Dennett seems to be up to. I’ll just leave that alone.

As to the first question, I surely don’t know. The problem he’s dealing with seems to go like this: Human beings have certain neuro-mental equipment inherited from biology. While the nature of this equipment is somewhat obscure, there’s nothing particularly problematic about its existence. We also deliberately design all sorts of artifacts and practices, some simple (e.g. toothpicks, scraping mud off the soles of our shoes), some complex (e.g. cathedrals, Shakespeare plays). Just how we go about that is deeply obscure, but not particularly problematic.

In between we’ve got things like language that aren’t the result of deliberate conscious design, and their biological foundations are obscure and problematic. They’re creatures of culture, but not under our deliberate control. Dennett accounts for these things through memes, cultural agents that somehow control us rather than being under our control.

It’s not a very useful conceptual solution to the problem. Admitting “I don’t have a clue” seems to me a better first step than constructing these baroque intellectual fantasies. Alternatively, Dennett could read my book on music, Beethoven’s Anvil, where I lay the foundations of a neural account of collective musical cultures.

But that’s a different, and rather more complex story, best left for another day.

Addendum: Dennett almost redeems himself at the very end of the “If brains are computers” talk. How? He talks about phonemes and asserts (c. 1:07:17) “This I submit is the key evolved innovation, the design feature, which made human culture possible, which made human minds possible.” Why? Because phonemes are the way the speech stream is digitized. And without such digitization, language wouldn’t work.
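Why digitization matters is easy to illustrate. Here's a minimal sketch of my own devising (not Dennett's; the numbers are made up, think of the categories as phoneme targets along some acoustic dimension such as voice onset time): discrete categories absorb transmission noise at every copying step, while analog copies let errors accumulate.

```python
import random

CATEGORIES = [10.0, 40.0, 80.0]   # hypothetical phoneme category centers

def perceive(signal):
    """Snap a noisy continuous signal to the nearest category."""
    return min(CATEGORIES, key=lambda c: abs(c - signal))

def transmit(value, noise=8.0):
    """One noisy copying step: add bounded random error."""
    return value + random.uniform(-noise, noise)

analog = digital = 40.0
for _ in range(20):                        # twenty generations of copying
    analog = transmit(analog)              # analog: errors accumulate
    digital = perceive(transmit(digital))  # digital: each snap resets the error
                                           # (noise stays below half the gap
                                           # between categories, so the target
                                           # is always recovered)

print(f"analog after 20 copies:  {analog:.1f}")   # typically drifts from 40.0
print(f"digital after 20 copies: {digital:.1f}")  # still 40.0
```

The point of the sketch is Dennett's: a continuous signal copied many times degrades into noise, but a signal snapped to discrete categories at each step can be copied indefinitely without loss, which is what a spreading language requires.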

Well, I think many things were necessary for human culture to be possible. I’m not at all sure phonemicization is unique in that. But it’s certainly a necessary condition for language, and without language human culture wouldn’t have blossomed as it has. This is at best a quibble.

What’s important is that Dennett ended up thinking about phonemes. And phonemes are genetic elements in culture; it is the phonemes, not the word meanings, that play the genetic role. Here’s how I recently defined the notion of a coordinator, my term for the genetic elements of culture:
Coordinator: The genetic element in cultural processes. Coordinators are physical traits of objects or processes. The emic/etic distinction in linguistics and anthropology is a useful reference point. Phonetics is the study of language sounds. Phonemics is the study of those sound features, phonemes, that are active in a language.

The notion of a coordinator is, in effect, a generalization of the phoneme. A coordinator is a physical trait that is psychologically active/salient in cultural processes.
How and why I arrived at that formulation is more than I want to go into here. You can find the basics in “Language Games 1, Speech” in my working paper, The Evolution of Human Culture: Some Notes Prepared for the National Humanities Center. I note only that it was Kenneth Pike who generalized the linguistic distinction between phonemics and phonetics into the general emic/etic distinction.
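To make the emic/etic distinction concrete, here is a toy sketch of my own (the feature inventories are hypothetical simplifications): the same physical contrast, aspiration, is an inert etic detail in one language and an active coordinator in another.

```python
# Toy rendering of the coordinator idea: the etic level is the full set
# of physical traits; each language "activates" only some of them, and
# only the active (emic) traits distinguish one token from another.
# These inventories are drastically simplified for illustration.
ACTIVE_TRAITS = {
    "English": {"voicing"},
    "Hindi":   {"voicing", "aspiration"},
}

def same_phoneme(a, b, language):
    """Two speech events count as 'the same' iff they agree on every
    trait that is emically active in the given language."""
    return all(a[t] == b[t] for t in ACTIVE_TRAITS[language])

# Plain [p] versus aspirated [ph], identical except for aspiration:
p  = {"voicing": False, "aspiration": False}
ph = {"voicing": False, "aspiration": True}

print(same_phoneme(p, ph, "English"))  # True: aspiration isn't contrastive
print(same_phoneme(p, ph, "Hindi"))    # False: aspiration is contrastive
```

That is the sense in which a coordinator is a physical trait that is psychologically active: the trait is physically present in both languages, but only where it is active does it do cultural work.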
 
* * * * *
 
NOTE: Take a look at an old post, Links: Origins of Language and the Problem of Design, June 11, 2010.
