First, in asking THAT question I do not intend a bit of cutesy intellectual cleverness: Oh Wow! Let’s get the meme meme to examine its own history. My purpose would be just as well served by examining, say, the history of the term “algorithm” or the term “deconstruction,” both originally technical terms that have more or less entered the general realm. I’m looking at the history of the meme concept because I’ve just been reading Jeremy Burman’s most interesting 2012 article, “The misunderstanding of memes” (PDF).
Second, as far as I can tell, no version of cultural evolution is ready to provide an account of that history that is appreciably better than the one Burman himself supplies, and that account is straight-up intellectual history. In Burman’s account (p. 75) Dawkins introduced the meme concept in 1976
as a metaphor intended to illuminate an evolutionary argument. By the late-1980s, however, we see from its use in major US newspapers that this original meaning had become obscured. The meme became a virus of the mind.
That’s a considerable change in meaning. To account for that change Burman examines several texts in which various people explicate the meme concept and attributes the changes in meaning to their intentions. Thus he says (p. 94):
To be clear: I am not suggesting that the making of the active meme was the result of a misunderstanding. No one individual made a copying mistake; there was no “mutation” following continued replication. Rather, the active meaning came as a result of the idea’s reconstruction: actions taken by individuals working in their own contexts. Thus: what was Dennett’s context?
And later (p. 98):
The brain is active, not the meme. What’s important in this conception is the function of structures, in context, not the structures themselves as innate essences. This even follows from the original argument of 1976: if there is such a thing as a meme, then it cannot exist as a replicator separately from its medium of replication.
Burman’s core argument thus is a relatively simple one. Dawkins proposed the meme concept in 1976 in The Selfish Gene, but the concept didn’t take hold in the public mind. That didn’t happen until Douglas Hofstadter and Daniel Dennett recast the concept in their 1982 collection, The Mind’s I. They took a bunch of excerpts from The Selfish Gene, most of them from earlier sections of the book rather than the late chapter on memes, and edited them together and (pp. 81-82)
presented them as a coherent single work. Although a footnote at the start of the piece indicates that the text had been excerpted from the original, it doesn’t indicate that the essay had been wholly fabricated from those excerpts; reinvented by pulling text haphazardly, hither and thither, so as to assemble a new narrative from multiple sources.
It’s this re-presentation of the meme concept that began to catch on with the public. Subsequently a variety of journalistic accounts further spread the concept of the meme as a virus of the mind.
Why? On the face of it, it would seem that the virus of the mind was a more attractive and intriguing concept than Dawkins’s original, more metaphorical conception. Just why that should have been the case is beside the point. It was.
All I wish to do in this note is take that observation and push it a bit further. When people read written texts they do so with the word meanings existing in their minds, which aren’t necessarily the meanings that exist in the minds of the authors of those texts. In the case of the meme concept, the people reading The Selfish Gene didn’t even have a pre-existing meaning for the term, as Dawkins introduced and defined it in that book. The same would be true for the people who first encountered the term in The Mind’s I and subsequent journalistic accounts.
Defining Abstract Terms
How do we introduce new terms into the language? In particular, how do we introduce new abstract terms, for certainly “meme” is an abstract concept, and a rather slippery one at that? The obvious answer is that we marshal a bunch of existing terms into some pattern, into a text, and say: “See here, THAT’s our new concept.” That’s what Dawkins did in The Selfish Gene; that’s what Hofstadter and Dennett did in The Mind’s I.
In the early 1970s my teacher, the late David Hays, proposed just that account of abstract concepts. His paradigmatic example was charity: Charity is when someone does something nice for someone else without thought of reward. Any story that matches the pattern indicated in the definiens (the italicized portion) is said to be an instance of charity.
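Hays’s idea can be caricatured in a few lines of code. This is only a toy sketch, not his actual formalism: a story is a bundle of elements, and an abstract term is a pattern that either matches the story or doesn’t. All the field names here are invented for illustration.

```python
def is_charity(story):
    """A story instantiates 'charity' if one agent benefits a different
    agent without expecting a reward. Field names are invented."""
    return (
        story.get("act") == "benefit"
        and story["agent"] != story["beneficiary"]
        and not story.get("expects_reward", False)
    )

story = {"agent": "Robin", "beneficiary": "Marian",
         "act": "benefit", "expects_reward": False}
print(is_charity(story))  # True
```

The point of the caricature is only that the abstract term lives one level above the story: it names a pattern over concrete elements, not another concrete element.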
Hays advanced this concept in the context of a particular type of semantic theory, known as a semantic or cognitive network:
The nodes designate concepts while the connections between them (called links or arcs) designate relations between concepts:
Thus “person” and “Fred” are concepts and “ISA” is a relation between them, asserting that Fred is a person. Similarly, “hit” is a concept and “AGT” is a relation, asserting that “Fred hits...” And so forth.
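In code, the diagram above amounts to a set of labeled arcs. This is a minimal sketch, not any particular 1970s system: each arc is a (head, relation, tail) triple.

```python
# A minimal semantic network: concepts as nodes, labeled arcs as relations.
network = [
    ("Fred", "ISA", "person"),   # Fred is a person
    ("hit", "AGT", "Fred"),      # Fred is the agent of a hitting event
]

def relations(node, net):
    """Every arc that touches a given concept node."""
    return [arc for arc in net if node in (arc[0], arc[2])]

print(relations("Fred", network))
# → [('Fred', 'ISA', 'person'), ('hit', 'AGT', 'Fred')]
```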
In such a model we can think of a text as being generated by a path through a network:
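A toy version of that idea: take an ordered list of arcs as the path and read each arc off as a clause. The relation label “PAT” (patient of the action) is an assumption added for the example.

```python
# A text as a path through the network, one clause per arc.
path = [
    ("Fred", "ISA", "person"),
    ("hit", "AGT", "Fred"),
    ("hit", "PAT", "ball"),   # "PAT" (patient) is an assumed label
]

def verbalize(path):
    """Crude linearization: read each arc off as a clause."""
    return " ".join(f"{head} {rel} {tail}." for head, rel, tail in path)

print(verbalize(path))  # Fred ISA person. hit AGT Fred. hit PAT ball.
```

A real generator would, of course, map relations to grammatical constructions rather than echoing the labels; the sketch only shows the path-to-text direction of the model.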
Such networks were thoroughly explored by researchers in a variety of fields during the 1970s and 80s and remain under investigation still. It is reasonable to think of the mind as an extensive cognitive network that is somehow implemented in human brains, reasonable and respectable, but by no means obviously true.
Back to Memes
How does this help us to think about the meme concept? Mostly it gives us a non-verbal model that we can grasp and examine—those diagrams.
In the first place there is the network concept itself as an explicit model of mental organization. The meaning of any one term (that is, node) in such a network, is a function of the structure of the entire network. Terms aren’t pointillistically defined as autonomous entities.
If my cognitive network differs from Richard Dawkins’s network on certain topics, then I’m likely to read what he says on those topics rather differently from what he intended. But even in the case where our networks are relatively congruent, I might arrive at a somewhat different sense of a new term if I arrive at it through a different path through the network. Burman asserts that THAT’s what Hofstadter and Dennett did: In presenting their readers with a text different from Dawkins’s, though one stitched together from fragments Dawkins himself wrote, they took their readers on a different path through their respective cognitive networks.
We thus have two independent sources of variation. On the one hand, in any given population of readers, there will be differences among their cognitive networks. Some of these differences will be minor, some major. (One of Hays’s early articles on abstract terms explicated different conceptions of “alienation” as different cognitive networks for defining the term.*) Thus, even where they read the same text, they likely will construct differing meanings for that text. On the other hand, given a set of readers with congruent networks, we can describe different paths through the network and therefore arrive at different conceptions of a newly defined term.
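The first source of variation can be made concrete with the same toy triples. Model a reader’s “sense” of a new term as the set of concepts directly linked to it in that reader’s network after reading the defining text; two readers whose networks attach “meme” to different neighbors come away with different senses. The relation labels and triples here are invented for illustration.

```python
def sense(term, network):
    """The neighborhood of a term: every concept directly linked to it."""
    out = {tail for head, _, tail in network if head == term}
    back = {head for head, _, tail in network if tail == term}
    return out | back

reader_a = [("meme", "ISA", "replicator"), ("meme", "LIKE", "gene")]
reader_b = [("meme", "ISA", "virus"), ("virus", "INFECTS", "mind")]

print(sorted(sense("meme", reader_a)))  # ['gene', 'replicator']
print(sorted(sense("meme", reader_b)))  # ['virus']
```

On this toy model, reader A’s “meme” lives next to genes and replicators, while reader B’s lives next to viruses: the same word, two different positions in two different networks.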
Both of these effects would have been working in the readership for memetics. Some people would be expert in biology, as Dawkins himself is. But others would not. Some would be trained academics, but most were not. And so forth.
What does Dennett have to say on such matters? I do not know. He talks about words as memes in his Cold Spring Harbor piece, The Cultural Evolution of Words and Other Thinking Tools, and in From Typo to Thinko, but his treatment tends to be pointillistic, treating words as independent entities. He has nothing to say about how words interact with one another, though he surely knows that such interaction is important.
In The Cultural Evolution of Words Dennett asserts (p. 5):
Similarly, when you acquire a language, you install, without realizing it, a Virtual Machine that enables others to send you not just data, but other virtual machines, without their needing to know anything about how your brain works.
So words are “software structures, like Java applets” (p. 5). When people speak, they are sending one another streams of such applets. But what happens in people’s heads once the applets arrive is completely unspecified.
So, we have the meme applet. Once one has read, say, Dawkins’s chapter in The Selfish Gene, that applet has become “installed.” What’s the relationship between that applet and the applets for “gene”, “culture”, and “evolution”? He doesn’t say. That is, he doesn’t give us any way to think about such questions. Somehow it’s in the virtual machine software.
And that, I am afraid, is just not good enough. As it now stands, Dennett’s account of words as memes has nothing to say about meaningful relations between words. He has nothing to offer on the question of why different people have different concepts of “meme” or, for that matter, “deconstruction” or “algorithm.” It must be different “applets,” different “virtual machines.” Well, sure. Where do the differences come from?
* David G. Hays (1976). On "Alienation": An Essay in the Psycholinguistics of Science. In R. R. Geyer & D. R. Schweitzer (Eds.), Theories of Alienation. Leiden: Martinus Nijhoff, pp. 169-187.