Tuesday, June 22, 2010

Cultural Evolution 8: Language Games 1, Speech

The key to the treasure is the treasure.
– John Barth

But I’m not talking of language games in Wittgenstein’s sense, though the Wittgenstein of the Tractatus had a considerable influence on me as an undergraduate. No, I’m thinking of game theory, not something I’ve studied, though I did have an undergraduate course on decision theory taught by R. B. Braithwaite. But I’m getting ahead of the game.

As the title says, this post is about language. There’s been a fair amount of work done on language from an evolutionary point of view, which is not surprising, as historical linguistics has well-developed treatments of language lineages and taxonomy, the “stuff” of large-scale evolutionary investigation. While this work is directly relevant to a consideration of cultural evolution, however, I will not be reviewing or discussing it. For it doesn’t deal with the theoretical issues which most concern me in these posts, namely, a conceptualization of the genetic and phenotypic entities of culture. This literature is empirically oriented in a way that doesn’t depend on such matters.

The Arbitrariness of the Sign
In particular, I want to deal with the arbitrariness of the sign. Given my approach to memes, that arbitrariness would appear to eliminate the possibility that word meanings could have memetic status. For, as you may recall, I’ve defined memes to be perceptual properties – albeit sometimes very complex and abstract ones – of physical things and events. Memes can be defined over speech sounds, language gestures, or printed words, but not over the meanings of words. Note that by “meaning” I mean the mental or neural event that is the meaning of the word, what Saussure called the signified. I don’t mean the referent of the word, which, in many cases, but by no means all, would have perceptible physical properties. I mean the meaning, the mental event. In this conception, it would seem that word meaning cannot be memetic.

That seems right to me. Language is different from music and drawing and painting and sculpture and dance; it plays a different role in human society and culture. On that basis one would expect it to come out fundamentally different on a memetic analysis.

This, of course, leaves us with a problem. If word meaning is not memetic, then how is it that we can use language to communicate, and very effectively over a wide range of cases? Not only language, of course, but everything that depends on language. Literature obviously – which I’ll take up in the next post – but much else as well.

Speech as a Means of Communication
Willard van Orman Quine has given us a classic thought experiment that points up the problem of word meaning. He broaches the issue by considering the problem of radical translation, “translation of the language of a hitherto untouched people” (Quine 1960, 28). He asks us to consider a “linguist who, unaided by an interpreter, is out to penetrate and translate a language hitherto unknown. All the objective data he has to go on are the forces that he sees impinging on the native’s surfaces and the observable behavior, focal and otherwise, of the native.” That is to say, he has no direct access to what is going on inside the native’s head, but utterances are available to him. Quine then asks us to imagine that “a rabbit scurries by, the native says ‘Gavagai’, and the linguist notes down the sentence ‘Rabbit’ (or ‘Lo, a rabbit’) as tentative translation, subject to testing in further cases” (p. 29). And thus begins one of the best known intellectual romps in the philosophy of language.

Quine goes on to argue that, in thus proposing that initial translation, the linguist is making illegitimate assumptions. He begins his argument by noting that the native might, in fact, mean “white” or “animal,” and later on he offers more exotic possibilities, the sort of things only a philosopher would think of. Quine also notes that whatever gestures and utterances the native offers as the linguist attempts to clarify and verify will be subject to the same problem. Quine’s argument is thorough and convincing.

When he did that work, however, he did not, of course, have access to a range of more recent work in cognitive anthropology and evolutionary psychology that indicated that our adapted minds have a preferred way of parsing the world, as do baboons. To be sure, this is “overwritten” and augmented in culture-specific ways, but those underlying perceptual and cognitive systems do not disappear. To consider a specific example, the work on folk taxonomy (Berlin 1992) suggests that there is a so-called basic level of designation, and that is at the level of “rabbit” and not “animal” (in fact, many languages don’t even have a word at that level of generality). So the linguist is reasonable in assuming “rabbit” is a more likely translation than “animal.” Other considerations are likely to rule out “white” or Quine’s other suggestions. I have no reason to believe that this cognitive architecture so constrains matters that there is only one possible referent for “Gavagai.” But I do think that it is likely to turn out that, all other things being equal, “rabbit” is in fact the best guess.

This situation, of course, is rather different from that of ordinary speech between people who share a common language. In the common situation both parties would know the meaning of “Gavagai.” Yet, however effective it is, ordinary speech sometimes fails to secure understanding between people and, where such understanding is achieved, that achievement has required back-and-forth speech. The mutual understanding is achieved through a process of negotiation. As William Croft reiterates in chapter 4 of Explaining Language Change, we cannot get inside one another’s heads and so must negotiate meanings in conversation.

That is to say, communication through language is not a matter of sending information through a pipeline. It does not happen according to what Michael Reddy (1993) has called the conduit metaphor. Reddy’s article is based on 53 example sentences. Here are the first three (p. 166):
1. Try to get your thoughts across better
2. None of Mary’s feelings came through to me with any clarity
3. You still haven’t given me any idea of what you mean
Reddy’s argument is that many of our statements about communication seem to be based on the notion of sending something (the thought, idea, feeling) through a conduit; hence he calls it the conduit metaphor. He knows that communication doesn’t work that way, but that’s not his central issue. His central concern is to detail the way we use the conduit metaphor to structure our thinking about communication.

Reddy’s argument is reminiscent of a somewhat earlier argument by Paul de Man, “Form and Intent in the American New Criticism” (1983, first published in 1971). Consider this passage (p. 25):
“Intent” is seen, by analogy with a physical model, as a transfer of a psychic or mental content that exists in the mind of the poet to the mind of a reader, somewhat as one would pour wine from a jar into a glass. A certain content has to be transferred elsewhere, and the energy necessary to effect the transfer has to come from an outside source called intention.
De Man’s point was that, when we read a text, the intention (de Man uses the term in its somewhat rarified philosophical sense) that gives life to those signs on the page is our intention, not the author’s. And he is right.

De Man’s insight, and similar ones by Derrida, Barthes, Foucault and others, had an electrifying effect on literary critics in the United States, leading to a tremendously fertile period in academic literary criticism that, however, became increasingly sclerotic in the 1990s. But that story’s neither here nor there. My point is simply that these thinkers were attempting to deal with a real problem and, ultimately, they failed.

What, for example, could Derrida (1976, p. 158) have possibly meant by proclaiming “There is nothing outside of the text”? What he did not mean is that the world is nothing but a text and a text created by more or less arbitrary social conventions. Read sympathetically, and in context, the phrase seems to mean something to the effect that there is no way we can “step outside” language so as to examine, in full omniscient and transcendental objectivity, the relationship between language and the world. And that, it seems to me, is true. We’re always going to be immersed in “language,” whether natural or the various languages of science and mathematics.

How, then, do we fly free of the bottle? We play games.

Language Games & Game Theory
Where de Man argues that intent cannot be transmitted from one speaker to another like pouring wine from a jar, William Croft points out that linguistic communication is tricky “precisely because our thoughts cannot leave our heads” (2000, p. 111). Croft is a linguist who has undertaken to explain language change using an evolutionary approach. He defines a language to be “the population of utterances in a speech community” (p. 26), thus focusing our attention, not on some abstract language system, but on the concrete production of speech.

How does Croft deal with the fact that we cannot transmit thoughts directly to another’s mind? He argues that meaning is negotiated in the back-and-forth of conversation and draws on game theory to make his argument (p. 95):
There is a problem here: the hearer cannot read the speaker’s mind, and she can’t read his. This is what is called a COORDINATION PROBLEM. In speaking and understanding, speaker and hearer are trying to coordinate on the same meaning.
Croft then introduces the notion of a third-party Schelling game in which two players “are presented by a third party with a set of stimuli” which helps them converge on the same meaning. Sometimes it works, sometimes not. One possibility, he argues, is to use “natural perceptual or cognitive distinctiveness [as] a COORDINATION DEVICE” (p. 96). That gives us the adapted mind that I invoked in discussing Quine’s problem. Croft goes on to discuss a variety of linguistic devices as non-conventional coordination devices.
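Croft’s Schelling game can be given a toy illustration. The sketch below is my own construction, not Croft’s: two agents repeatedly try to coordinate on a meaning for an unfamiliar word, starting from a shared salience bias (the “basic level” preference invoked in discussing Quine’s problem) and reinforcing whichever meaning they happen to agree on. The candidate meanings and the weights are, of course, hypothetical.

```python
import random

random.seed(1)

# Hypothetical shared salience weights: the "basic level" bias acts
# as a coordination device by skewing both players toward "rabbit".
MEANINGS = {"rabbit": 3.0, "animal": 1.0, "white": 1.0}

def choose(weights):
    # sample a meaning in proportion to its current weight
    r = random.uniform(0, sum(weights.values()))
    for meaning, w in weights.items():
        r -= w
        if r <= 0:
            return meaning
    return meaning

def play(rounds=200):
    speaker, hearer = dict(MEANINGS), dict(MEANINGS)
    successes = 0
    for _ in range(rounds):
        s, h = choose(speaker), choose(hearer)
        if s == h:                    # coordination achieved
            successes += 1
            speaker[s] += 1.0         # both reinforce the shared choice
            hearer[h] += 1.0
    return successes / rounds, speaker, hearer

rate, speaker, hearer = play()
print(round(rate, 2))
```

Because reinforcement happens only on matched choices, the two agents’ weight tables are pulled toward the same meaning over time, which is the sense in which shared salience makes coordination likely without either party reading the other’s mind.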

While the details are interesting and important – I recommend his discussion to you – we need not worry about them now.

Save one. Croft notes that, in order for speaker and hearer to reach agreement in conversation, their mental states “need not be identical, though it is assumed that they are systematically related” (p. 99). Later on he notes that (p. 114):
successful communication involves not the recovery of an original, ‘correct’ interpretation of the speaker’s original intention, but instead an interpretation that evolves over the course of the conversation, and is assessed by the success or failure of the higher social-interactional goals that the interlocutors are striving to achieve.
One reason why this effort is not doomed to failure from the beginning is the fact that although we cannot read each other’s minds, we do inhabit a shared world.

Croft’s general point, then, is simple: speech communication is a two-way interaction, not the one-way transmission of meaning, information, whatever, through a channel. De Man’s problem is thus solved for the case of face-to-face interaction, a common case, and surely the most basic one. Note that this solution involves no recourse to a transcendental signified, no stepping outside the text, nothing like that. It involves the ordinary and obvious means of interactive speech. In this sense, the key to the treasure is the treasure. Nothing else is required.

But what, you may ask, of written communication, where direct interaction is not possible? After all, de Man was a literary critic, writing about the reading of written texts. What about that?

Good question. I’m going to punt on it. But I observe that some written communication – correspondence – does involve interaction, but at a slower pace than conversation, often much slower. In the case of literary texts, yes, readers cannot ordinarily interact with authors, but they can interact with one another. I’ll say a little about that in the next post. Beyond that, yes, there are issues, serious issues. But this is not the place to address them. My concern here is just to get things started.
Note: Mathematician and psychologist Mark Changizi (1999) has an interesting argument about why vagueness of word meaning is essential to the proper functioning of language. His argument is grounded in considerations of computability and I recommend it to you. It makes an interesting complement to the game-theoretic conception of speaking.
Addition: See subsequent post reporting an experiment that David Hays did at RAND in the mid-1950s. It’s relevant to the game theoretic treatment of conversation.
What is a language and what are the memes? 
Now I want to shift gears a bit and work my way back to the physical “side” of the linguistic sign, because that’s where we’re going to go looking for memetic entities.

Throughout this post I’ve been assuming that we know what a language is. Now I want to get picky. Here’s what Sydney Lamb has to say in Pathways of the Brain. He’s talking about Roman Jakobson, the great linguist (p. 41):
Using the term language in a way it is commonly used . . . we could say that he spoke six languages quite fluently: Russian, Czech, German, English, Swedish, and French, and he had varying amounts of skill in a number of others. But each of them except Russian was spoken with a thick accent. It was said of him that “He speaks six languages, all of them in Russian.” . . . the evidence indicates that from a neurocognitive point of view there is no such unit as a language. What exists from a neurocognitive point of view is not so much one linguistic system as a group of interconnected systems, relatively independent from one another.
Lamb goes on to assert that (p. 42):
Professor Jakobson’s internal linguistic information included a single phonological system, that of his native Russian, together with separate systems of grammar and lexicon for Russian, Czech, English, German, French, and Swedish – with some overlap in these grammars and lexicons . . . along with his more limited abilities in various additional languages; plus a conceptual system connected to them all.
So far we’ve been concerned with how meaning is negotiated, where meaning is a matter of the conceptual system. That’s on one “side” of the arbitrary sign, the side inside the brain. Now we’re going to look at the other “side” of the sign, the side that’s in public view, the physical sign. It’s that physical side that most differs among languages.

The question before us is: How do we conceptualize the memetic elements of language? In glossing the emic/etic distinction in a comment to John Wilkins I remarked that (now I’m simply repeating that comment) the distinction originates in linguistics, in the distinction between phonetics and phonemics. The former is about the psychophysics of speech sound while the latter is about phoneme systems. These are obviously very closely related matters, but they aren’t the same. We tend to perceive the speech stream as consisting of discrete sound entities, syllables and phonemes; this is the domain of phonemics. But the speech signal is, in fact, continuous. If you look at a sonogram of some chunk of speech, you can’t draw a series of vertical lines through it separating one phoneme from another; nor can you snip a tape recording into phoneme-long or syllable-long segments and reassemble it into something that sounds like natural speech. The aspects of the speech stream which are phonemically active differ from one language to another, which is why foreign languages all sound like “Greek.” Independently of the fact that you don’t know what the words mean or how the syntax works, you can’t even hear the phonemes in the speech stream.

Now, that’s the distinction I’m after, between phonemes and the raw speech stream. That’s the distinction I drew in my discussion of music (third post). Phonemes are those properties of the speech stream that are linguistically active. We need, however, to distinguish between segmental phonemes and suprasegmental phonemes. The segmental phonemes are roughly parallel to the letters of an alphabetic writing system. Suprasegmentals include tone, stress, and prosodic patterns. And then we need to consider ordering as well, as the order in which elements occur is certainly a property of the speech stream, and a most important one.

Before thinking about order, though, we need to think a bit more about what’s going on. Roughly speaking, two things need to be extracted from the speech signal: 1) word identities (to be somehow linked to word meanings), and 2) the relations between the words (syntax). My quick take on matters – I’m not a linguist and I’ve not thought this through – is that both segmental and suprasegmental phonemes are involved in both of those processes. Relations between words are often indicated by word affixes, which are realized through segmental phonemes. Word identities are certainly realized by segmental phonemes, but tone and accent are involved as well.

Beyond this, relations between words are signaled by word order. In linguistic typology, typical word order is the primary trait on which classification is based. Thus one has SVO languages (subject-verb-object), VSO languages (verb-subject-object), and so forth. As those designations suggest, word order indicates grammatical function, that is, relations between words.
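As a trivial illustration of how typological word order carries grammatical function, here is a sketch (my own, purely illustrative) that linearizes the same subject-verb-object proposition under different canonical orders:

```python
# Canonical constituent orders used in linguistic typology.
ORDERS = {
    "SVO": ("S", "V", "O"),   # e.g. English
    "SOV": ("S", "O", "V"),   # e.g. Japanese
    "VSO": ("V", "S", "O"),   # e.g. Classical Arabic
}

def linearize(subject, verb, obj, order):
    """Arrange the same proposition under a given typological order."""
    parts = {"S": subject, "V": verb, "O": obj}
    return " ".join(parts[slot] for slot in ORDERS[order])

for order in ORDERS:
    print(order, "->", linearize("the dog", "bites", "the man", order))
```

The proposition is identical in each case; only the positional convention changes, which is why a hearer who knows the convention can recover who did what to whom from order alone.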

Thus between word order and phonemes we’ve got a rich set of memetic elements. And we could consider morphology here as well. Taken together these aspects of the speech signal seem to be as memetically rich and abstract as the musical properties we looked at in discussing Rhythm Changes (first post).

* * * * *

We’ve got one last matter to attend to in this section: how does this discussion square with Croft’s treatment? After all, I’ve pretty much adopted his view of meaning as being negotiated in speech; what about his views on memetic issues? That’s tricky. In some ways our views are quite different: different terms, different definitions. But we agree that selection acts on the physical material of spoken language, on utterances. That’s enough to get this conversation started.

In what follows I offer a few notes about the tricky differences in our terms and definitions. Croft has been influenced by Richard Dawkins and David Hull and so thinks of memetic elements as things that replicate rather than as properties that allow a common apprehension of the speech signal.

The emic/etic distinction seems to play little or no role in his thinking about how to characterize language as an evolving phenomenon, while it is central to mine. And he thinks of the phenotypic element as an interactor in Hull-Dawkins terminology and identifies that with the speaker, which is really quite different from my identification of the phenotypic element with the physical speech stream itself. And I do that – in parallel with my treatment of music – because that’s what’s acted upon in selection. But on that particular issue, Croft takes the same view that I have. He defines a language as (p. 26) “the population of utterances in a speech community.” He regards the utterance as the linguistic equivalent of DNA, which is fine, and coins the term “lingueme” to designate the memetic elements of language. Utterances consist of linguemes.

In Conversation
By way of starting to bring this post to a close, let me paraphrase and recast a passage from my essay-review of Steven Mithen’s The Singing Neanderthals (Benzon 2005).

In these discussions I have been assuming that the nervous system operates as a self-organizing dynamical system as, for example, Walter Freeman (e.g. 1995) has argued. Using Freeman’s work as a starting point, I’ve argued that, when individuals are making music with one another, their nervous systems are physically coupled with one another for the duration of that musicking (Benzon 2001, pp. 47 ff.). There is no need for any symbolic processing to interpret what one hears or to generate a response that is tightly entrained to the actions of one’s fellows.

My earlier arguments were developed using the concept of coupled oscillators, which has been applied to the phenomenon of synchronized blinking by fireflies (Strogatz and Stewart 1993). Such tightly synchronized activity, I argued, is a critical defining characteristic of human musicking. What musicking does is bring all participants into a temporal framework where the physical actions – whether dance or vocalization – of different individuals are synchronized on the same time scale as that of neural impulses, that of milliseconds. Within that shared intentional framework the group can develop and refine its culture. Everyone cooperates to create sounds and movements they hold in common.
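The coupled-oscillator idea can be illustrated with a toy Kuramoto-style model, a standard formalism for firefly-like synchronization. This is only a sketch of the mathematics, not of neural dynamics, and the parameter values are arbitrary: each oscillator has a slightly different natural frequency, yet mutual coupling pulls the whole population onto a common phase.

```python
import math
import random

random.seed(0)

def kuramoto(n=20, coupling=2.0, dt=0.05, steps=2000):
    """Toy Kuramoto model: n oscillators with slightly different
    natural frequencies pull one another toward a common phase."""
    freqs = [1.0 + random.uniform(-0.1, 0.1) for _ in range(n)]
    phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        new = []
        for i in range(n):
            # mean-field coupling term: each oscillator is nudged
            # toward the phases of all the others
            pull = sum(math.sin(phases[j] - phases[i]) for j in range(n)) / n
            new.append(phases[i] + dt * (freqs[i] + coupling * pull))
        phases = new
    # order parameter r: 1.0 means perfect synchrony, near 0 incoherence
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

print(round(kuramoto(), 3))
```

With the coupling strength well above the spread of natural frequencies, the order parameter climbs close to 1; set `coupling=0.0` and the oscillators drift apart. That contrast is the point of the firefly analogy: synchrony is an achievement of the coupling, not of any one oscillator.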

There is no reason whatever to believe that one day fireflies will develop language. But, obviously, human beings have already done so. I believe that, given the way nervous systems operate, musicking is a necessary precursor to the development of language. And there is evidence that talking individuals must be within the same intentional framework. Consider an observation that Mithen offers early in his book (p. 17). He cites work by Peter Auer who, along with his colleagues, has analyzed the temporal structure of conversation. They discovered that, when a conversation starts, the first speaker establishes a rhythm to which the other speakers time their turn-taking. That is, even though they are only listening, other parties are actively attuned to the rhythm of the speaker’s utterance (cf. Condon 1986). What if this were necessary to conversation, and not just an incidental feature of it?

Thus we have the memetic features of the speech stream coupling speaker and hearer together into a single dynamical system. Assume, for the sake of argument, that we have a two-party conversation. When the parties enter into their conversation they each give up many degrees of behavioral freedom and agree to cooperate in arriving at a mutual understanding. Each party is internally partitioned so that meaning “happens” on one side of the partition, but not the other. Let’s call the meaning side the meaning system and the other side the signifying system. It is the two signifying systems that are physically synchronized with one another. And it is the meaning systems that are playing a cooperative game with one another. They play the game by manipulating their respective signifying systems so as to send signals to one another.

A most peculiar activity.

Next Two Posts
The next post will discuss literature, conceptualizing literary texts as performances in group-wide coordination games. The final post will attempt to wrap things up by discussing Nina Paley’s film, Sita Sings the Blues.

A Note About John von Neumann
It has long been obvious to me that the cognitive sciences are what happened when computation and the computer hit the behavioral sciences as a source of models and metaphors. That means that the cognitive sciences owe a debt to John von Neumann, the Hungarian mathematician who is widely credited with coming up with the scheme for realizing digital computation in electronic circuitry. Though the term “von Neumann machine” is widely applied to machines based on serial computation, it is worth noting that von Neumann also invented the concept of a cellular automaton, which is a scheme for parallel computation.

More to our point in this post, von Neumann is also one of the founders of game theory. Thus he stands behind some of the most interesting work that’s been done in behavioral biology over the past two or three decades. Though he’s not often mentioned in discussions of evolutionary psychology – at least not in the discussions I’ve read – his formative role in game theory has him standing behind evolutionary psychology as he stands behind the cognitive sciences.

Both of them are disciplines ultimately grounded in computation. Thus, if humanists are to fully benefit from the newer psychologies, we must come to terms with computation in one way or another. We need not become expert in either the theory or the practice of computation, but we must become sufficiently comfortable so that we can fruitfully collaborate with experts.

Benzon, William (2001). Beethoven’s Anvil. Basic Books.

Benzon, William (2005). Synch, Song, and Society. Human Nature Review 5, 2005, pp. 66-85.

Berlin, Brent (1992). Ethnobiological Classification: Principles of Categorization of Plants and Animals in Traditional Societies. Princeton, Princeton University Press.

Changizi, Mark A. (1999) Vagueness, rationality and undecidability: A theory of why there is vagueness. Synthese 120: 345-374.

Condon, W. S. (1986). Communication: Rhythm and Structure. In J. R. Evans and M. Clynes (eds.), Rhythm in Psychological, Linguistic and Musical Processes. Springfield, Illinois: Charles C. Thomas, 55-78.

Croft, William (2000). Explaining Language Change: An Evolutionary Approach. Longman.

De Man, Paul (1983). Form and Intent in the American New Criticism. Blindness and Insight. University of Minnesota Press, 20-35.

Freeman, W. J. (1995). Societies of Brains: A Study in the Neuroscience of Love and Hate. Hillsdale, NJ, Lawrence Erlbaum.

Lamb, Sydney (1999). Pathways of the Brain: The Neurocognitive Basis of Language. John Benjamins Publishing Company.

Quine, Willard van Orman (1960). Word and Object. MIT Press.

Reddy, Michael J. (1993). The conduit metaphor – a case of frame conflict in our language about language. Metaphor and Thought (2nd edn), ed. Andrew Ortony, 164-201. Cambridge University Press.

Strogatz, S. H. and I. Stewart (1993). "Coupled Oscillators and Biological Synchronization." Scientific American (December): 102-109.

Previous Posts in this Series
Cultural Evolution 1: How “Thick” is Culture?

Cultural Evolution 2: A Phenomenological Gut Check on Gene-Culture Coevolution

Cultural Evolution 3: Performances and Memes

Cultural Evolution 4: Rhythm Changes 1

Cultural Evolution 5: Rhythm Changes 2

Cultural Evolution 6: The Problem of Design

See also The Sound of Many Hands Clapping: Group Intentionality, which considers a very simple case of group behavior and thus is relevant to the issue of culture's collective nature. Consider this post an elaboration of my discussion of music in CE3: Performances and Memes.


  1. From all the reading I've done and original research over the past 30 years on the topic, it would appear that the arbitrariness of the sign isn't exactly as advertised. Very strong groundings in neural processing, acoustics, oral preadaptations involved in materials processing (chewing, swallowing), deep interconnectivities of body modules and parallelism between them and their control, and similarly for sensoria, and a common neural code, conspire to create the iconic anlagen of nonarbitrariness.

    But due to processing constraints having to do with automaticization, hierarchicalization, etc. of signal planning, production, reception, and interpretation, the internal transparency of forms lower down dies away while functionally shifting to higher levels. Thus the languages with the largest numbers of transparent forms tend to have much less morphosyntactic elaboration, and esp. reliance on hierarchy in syntax. Conversely, the languages with the largest morphologies, deepest derivations, etc. rely much more on hierarchical structure, have the fewest ideophonic forms (and even many onomatopes tend to be relatively arbitrarized), and very little transparency at the lexical or prelexical level.

    Their paradigmatic and pragmatic memberships and orderings, on the other hand, have an almost iconic transparency. Given that polysynthetic languages make even normal lexical items more pragmatic in their use, rather than obligatory in the predication, what we have here is a sort of structural inversion, and such systems tend to be metastable.

    So a lot going on here, not easily amenable to shallow analyses as are common in linguistic circles; a much more thoroughgoing holistic vantage is necessary. Heck, it took me 30 years of concerted effort, and I still don't see the complete picture.

    Jess Tauber

  2. Very interesting. I'm reminded of a linguistics paper I did long ago on biological metaphors (not on biological origins): http://www.umich.edu/~jlawler/mimicry.pdf, also of Charlie Pyle's work on the Duplicity of Language: http://www.modempool.com/pyle/dup/dup.html

    It's probably not true that "the emic/etic distinction plays no role in" Bill Croft's thinking. It necessarily plays a gigantic role in every linguist's thinking, as part of la double articulation. But not all language phenomena are amenable to emic/etic analysis; it works best with smallish (say, n < 1000) sets of relatively invariable elements, like phonemes (n < 100), morphology, and lexical roots.

    It doesn't help at all with syntax or semantics, let alone pragmatics. That's why Chomsky's approach to syntax was so popular -- it wasn't that it worked (it doesn't), but that it was actually an approach to syntax, which the Bloomfieldians had treated as, essentially, too fucking hard. Pike did try, with the tagmeme, but he didn't succeed, as he himself acknowledged.

    My own take is fairly radical for a linguist, I'm told. There's no such thing as "the English language" [for more generalizations, insert any language name here], for example. Instead, everybody makes up their own language, and then we all spend the rest of our lives trying to pass as English speakers, with varying degrees of success. "The ___ language" meme is the result. Not to mention "correct English".

    "Language" by itself isn't a meme so much as it's a species behavioral trait, like picking your nose or pissing. Somewhat more varied in structure, however.

  3. On your comment on the text by de Man: it is all right "our" intention that animates the text and constructs meaning, but "our" intention cannot mean the intention of an isolated reader: it is a shared intention, shared by that reader with other readers of the same text with whom one would want to communicate, and also with the author. There is a community of meaning ranging across space and time; not a solid community of clear-cut meaning (there are fuzzy edges, negotiations as you say, etc.) but an interactive community nonetheless, and the author is part of that. Therefore, authorial intention cannot be dismissed any more than our own intentions as readers, and the communicative intent that we share with our own addressees when we try to make sense of a text.

  4. John:

    My own take is fairly radical for a linguist, I'm told. There's no such thing as "the English language" . . . for example. Instead, everybody makes up their own language, and then we all spend the rest of our lives trying to pass as English speakers . . .

    Yes. And, as a result, in the aggregate and over time, the utterance pool of the language changes; it drifts. Now, just how most effectively to conceptualize this, I don’t know. But I agree with you; the notion of AN English language is an idealized abstraction.

    On emic/etic, Croft certainly doesn’t invoke it when laying out how he maps elements of the linguistic system onto the Dawkins/Hull scheme (replicator, interactor, selection, lineage) for selective systems in chapter 2. He’s clearly thinking in terms of replicators as tokens that get scattered around, where replicator is Dawkins’ generalization over gene and lingueme is Croft’s term for the replicators of language. While my notion of memes – memetically active properties – can be conceptualized that way, I find it awkward. For my conception is embedded in a particular view of how we interact with one another through language, well, really, through music as that’s where I’ve done my basic conceptual work. And the point of that conceptual work was to arrive at an account of such interaction at the neural level.

    Now, I almost typed “communication” instead of “interaction.” But I stopped and substituted “interaction” because that word is less likely to conjure up that pesky old conduit metaphor while “communication” almost inevitably will. I mean, even if you’ve read Reddy and said “amen, brother” to yourself a million times while reading that article, even then that conduit crap will rear its head when you think about communication.

    As a corrective let me offer an odd analogy which, however, I rather like.

    A number of years ago I saw a TV program on the special effects of the Star Wars trilogy. One of the things the program explained was how the Jabba the Hut puppet was manipulated. There were, I think, perhaps a half dozen operators for the puppet, one for the eyes, one for the mouth, one for the tail, etc. Each had a TV monitor which showed him what Jabba was doing, all of Jabba, not just his little chunk of Jabba. So each could see the whole, but manipulate only a part. Of course, each had to manipulate his part so it blended seamlessly with the movements of the other parts. So each needed to see the whole to do that. That seems to me a very concrete analogy to what musicians have to do in performing together. But it’s also a way to think about conversational partners interacting with one another, where the conversation itself, the utterances, corresponds to Jabba.

    The thing about the operators of the puppet is that they’re not sending signals to one another, they’re sending signals to the puppet. But they all see Jabba moving on their monitors. In conversation we do, of course, send signals to one another, signals in the form of mechanical waves propagated through the air. But the meaning isn’t IN those signals. Rather, it’s construed in each person’s mind as the signal unfolds. And it’s that meaning that’s negotiated and shaped by the conversation, by emitting mechanical waves.
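    The puppet setup can even be caricatured in a few lines of code. This is my own hypothetical illustration, nothing more: a shared state that everyone can read, where each operator writes only one part, and "coordination" is just each write responding to the whole.

```python
# Toy model of the Jabba setup: a shared state everyone can see,
# but each operator may move only his own part of it.
jabba = {"eyes": 0.0, "mouth": 0.0, "tail": 0.0}

def operator(part, gain):
    """Each operator nudges his part toward the overall average:
    he watches all of Jabba but moves only one piece."""
    def step(state):
        target = sum(state.values()) / len(state)
        state[part] += gain * (target - state[part])
    return step

operators = [operator("eyes", 0.5), operator("mouth", 0.3), operator("tail", 0.2)]
jabba["mouth"] = 1.0  # one operator starts a movement...
for _ in range(20):
    for step in operators:
        step(jabba)   # ...and the others blend with it
# By now the parts have converged on a common movement that no
# single operator dictated.
```

    No operator ever signals another; each responds only to the puppet. That is the sense in which the meaning of a conversation is "in" the conversation rather than in any one head.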

  5. John: As for emics and syntax and semantics, of course syntax and semantics take place in the brain. They are not, in principle, available for memetic use (that is, emic) in my conception. That’s almost the point of the conception, to draw a strict line between what’s outside in the world and therefore publicly accessible and what’s inside the head, and so private. But the central point of my discussion of Rhythm Changes is that what’s out there in the world can be pretty complex. Now, you’re not going to hear Rhythm Changes as a coherent musical entity unless you’ve already learned the appropriate musical syntax; without that syntax running in your brain you’re not going to perceive the long-distance relationships that hold Rhythm Changes together into a coherent whole. And with language as well, without syntax running in the head you’re not going to register much of the structure that’s there in the speech signal, but that structure’s still there. If it weren’t, your inner syntax engine wouldn’t have any purchase on what’s coming in.

    Jess: When I was drafting this a little voice told me “but you know, the sign isn’t completely arbitrary.” But I chose to ignore it because what I’ve got to say is complicated and messy enough without dealing with that.

    Jose: Actually, I agree with you, more or less. But I think we’ve got to grant de Man his point first. Then we can go about explicitly constructing a concept of shared intention. As long as we simply assume shared intentionality without thinking about how it could arise, we're going to miss something critical. That’s part of what I’m up to. I want to explicitly construct a concept of shared intentionality, and do so while insisting on attending to the physical reality of how these interactions take place. I have some remarks on this in a related post, on synchronized clapping. And I’ll say a bit more in my next post, on storytelling.

    Thanks, guys.

  6. I once heard a story about a man who grew up Jewish but who never learned much Hebrew or Yiddish. Whenever, as a child, he did something stupid his grandfather would mutter "chacham". The man grew up understanding the word to mean "stupid" but, in fact, it is both Hebrew and Yiddish for "smart person". The grandfather was being sarcastic.

    For all that he misunderstood the way his grandfather was conveying his meaning the man nevertheless understood the intended meaning. He had even learned to use the word "chacham" in a way that would be appropriate at least some of the time (like the broken clock which shows the correct time twice a day). His understanding of the word, however, could easily lead him to use it to mean exactly the opposite of what he intended (e.g., protesting "I'm no chacham!" meaning to say that he was no dummy).

    Note, also, that in any one instance the grandfather might have been "speaking Hebrew", might have been "speaking Yiddish", or might not have any clear notion of what language he was speaking...and that none of these alternatives would necessarily affect the meaning of what he was saying.

  7. More on emics: I’ve posted on Rhythm Changes, a complex memetic entity in jazz culture. I’m a pretty good jazz musician; Rhythm Changes is second nature to me. I can tell in seconds if a tune is based on Rhythm Changes, and I can jam on Rhythm Changes anytime, day or night.

    Now, for the last few years I’ve been jamming with some guys who are mostly into folk music and rock. For them the Grateful Dead are the tip top of musical excellence, whereas for me, the Grateful Dead are, well, the Grateful Dead. These guys might recognize Rhythm Changes, but they can’t play them.

    But I’m in much the same position with their home repertoire, even though almost every tune in that repertoire is simpler than the jazz tunes that are the center of my repertoire. I can pick up new jazz tunes fairly quickly. If it’s not formidably complex, I can jam on a tune without ever having played it before. I can adjust to the flow in real time. Not so with these simpler rock and folk tunes.

    And most of the time the problem is that I don’t know the boundaries of the form. There are a lot of tunes in this repertoire that don’t conform to the standards of the 12-bar blues or the 32-bar “standard.” They have some other form, and I can’t always tell what that form is, I can’t figure out the repetitions on the fly.

    And so I founder when I have to solo. I don’t know what I’m aiming for. I don’t know when the end of the phrase is coming up nor, by implication, do I know when a new one is starting. Without knowing those utterly simple formal boundaries, I’m lost. My music-syntax-machine doesn’t know how to “grab on” to the flowing sound. Until I learn to recognize the structure in the sound flow, its memetic forms, its emics, I’m lost. But the best way to learn the form is to have the syntax machine throw out guesses and see what happens.
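    That guess-and-test strategy is easy to caricature in code. The sketch below is my own hypothetical illustration, not a model of actual music cognition: propose a phrase length, check how well the sound flow lines up with itself at that period, and keep the best guess.

```python
def guess_phrase_length(events, candidates=(4, 8, 12, 16, 32)):
    """Throw out guesses at the form's period and see which one
    makes the event stream line up with itself best."""
    def score(period):
        if period >= len(events):
            return 0.0
        # Fraction of events that match the event one period later.
        matches = sum(events[i] == events[i + period]
                      for i in range(len(events) - period))
        return matches / (len(events) - period)
    return max(candidates, key=score)

# A mock 12-bar blues: one chord symbol per bar, three choruses.
blues = ["I", "I", "I", "I", "IV", "IV", "I", "I", "V", "IV", "I", "V"] * 3
print(guess_phrase_length(blues))  # → 12
```

    A listener doing something remotely like this would lock onto the 12-bar boundary after a chorus or two, which is roughly what it feels like when a form finally "clicks."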

    Wallace Chafe has a very interesting book, Discourse, Consciousness, and Time, in which he analyzes a corpus of spoken language. Speech tends to happen in 3 to 4 second chunks. I wonder what 3 to 4 second emic elements are lurking there in his analyses?

  8. Distributed Phonology

    A Replicated Typo recently published a number of abstracts, including this one:

    Robert F. Port, Rich memory and distributed phonology, Language Sciences, Volume 32, Issue 1, January 2010, Pages 43-55: doi:10.1016/j.langsci.2009.06.001

    Abstract: It is claimed here that experimental evidence about human speech processing and the richness of memory for linguistic material supports a distributed view of language where every speaker creates an idiosyncratic perspective on the linguistic conventions of the community. In such a system, words are not spelled in memory of speakers from uniform letter-like units (whether phones or phonemes), but rather from the rich auditory patterns of speech plus any coupled visual, somatosensory and motor patterns. The evidence is strong that people actually employ high-dimensional, spectro-temporal, auditory patterns to support speech production, speech perception and linguistic memory in real time. Abstract phonology (with its phonemes, distinctive features, syllable types, etc.) is actually a kind of social institution – a loose inventory of patterns that evolves over historical time in each human community as a structure with many symmetries and regularities in the community corpus. Linguistics studies the phonological (and grammatical) patterns of various communities of speakers. But linguists should not expect to find the descriptions they make to be explicitly represented in any individual speaker’s mind, much less in every mind in the community. The alphabet is actually a technology that has imposed itself on our understanding of language.

    I rather like this, particularly this notion of high-dimensional patterns and the view that “every speaker creates an idiosyncratic perspective on the linguistic conventions of the community” (see John Lawler’s comment above). I do not, however, have the time to figure out why and how this is consistent with the views I present above, though I suspect that I’d have recourse to the work of Peter Gärdenfors, Conceptual Spaces: The Geometry of Thought (not related to Fauconnier’s rather different notion).

  9. Quine’s argument (re "Gavagai") is thorough and convincing.

    Nyet. It's merely cogent, and really quite superficial, even in anthropological terms. The old-fashioned anthro-linguists would collect a massive amount of data on the oral tradition from dozens if not hundreds of speakers, first off, before making definitive statements re meaning, or reference (definition, etc). One utterance would not suffice--perhaps one thousand. And how does a Quine-in-the-fields know it's not a sentence, or say a reference to a rabbit-deity, instead of a rabbit? Or perhaps...it's one of the elders' rabbits (or deer, or wart-hog, what have you), etc.

    For that matter Quine's crude Gavagai example has little or no bearing on written language. The basic morphemes of indo-european syntax are not provisional in the sense Quine thought (and most behaviorists and Darwin-bots). Meaning, even in Quine's intensional sense, does not fluctuate nearly to the degree he wanted it to: the I-E words for mother, "madre", "mutter", etc. (maitreya or something in sanskrit) have been around probably 5000 years, if not more.

    Hey how about a golf meme? That might fly with the phrat-boys at BerubeCo too. The evolution of...Golf

  10. Surely you don't think that Quine was under the impression that he was giving a realistic account of fieldwork? If you do, then you are even deeper into making stuff up than I'd suspected.


  11. This comment has been removed by the author.

  12. Well, you made use of the example, and approve it. It's not sound anthropology nor even that profound as philosophical analysis. Kapn' Quine was primarily interested in...demolishing Mind (as in say, intentions, ideas, concepts, meaning, etc). And the Gavagai pseudo-explanation helped him with that goal, which you obviously share.

  13. Well, you made use of the example, and approve it.

    Puhleeze, gimme a break. I also recognized that Quine was conducting a thought experiment and assumed, apparently naively, that readers would know that as well. Why? Because that's what philosophers do.

  14. Learning a strange language in the field: here's how it's done, or, at any rate, how a master of the craft does it.