
Wednesday, July 10, 2013

Thinking About Language Evolution in the 21st Century

With Some Remarks on Memes, or Dennett Upside Down Cake

About two years ago Wintz placed a comment on Replicated Typo’s About page in which he lists several papers that make good background reading for someone new to the study of linguistic and cultural evolution. I’ve just blitzed my way through one of them, Language is a Complex Adaptive System (PDF) by Beckner et al. (2009)*, and have selected some excerpts for comment.

The point of this exercise is to contrast the way things look to a young scholar starting out now with the way they would have looked to a scholar starting out back in the ancient days of the 1960s, which is when both Dennett and I started out (though he’s a few years older than I am). The obvious difference is that, for all practical purposes, there was no evolutionary study of language at the time. Historical linguistics, yes; evolutionary, no. So what I’m really contrasting is the way language looks now in view of evolutionary considerations and the way it looked back then in the wake of the so-called Chomsky revolution, which, of course, is still reverberating.

Dennett’s thinking about cultural evolution, and memetics, is still grounded in the way things looked back then, the era of top-down, rule-based, hand-coded AI systems, also known as Good Old-Fashioned AI (GOFAI). In a recent interview he’s admitted that something was fundamentally wrong with that approach. He’s realized that individual neurons really cannot be treated as simple logical switches, but rather must be treated as quasi-autonomous sources of agency with some internal complexity. Alas, he doesn’t quite know what to do about it (I discuss this interview in Watch Out, Dan Dennett, Your Mind’s Changing Up on You!). I’m certainly not going to claim that I’ve got it figured out; I don’t. Nor am I aware of anyone who makes such a claim. But a number of us have been operating from assumptions quite different from those embodied in GOFAI, and Language is a Complex Adaptive System gives a good précis of how the world looks from those different assumptions.


Usage-Based Theories

p. 8:
Usage-based theories of language acquisition (Barlow & Kemmer, 2000) hold that we learn constructions while engaging in communication, through the “interpersonal communicative and cognitive processes that everywhere and always shape language” (Slobin, 1997). They have become increasingly influential in the study of child language acquisition (Goldberg, 2006; Tomasello, 2003).... They have turned upside down the traditional generative assumptions of innate language acquisition devices, the continuity hypothesis, and top-down, rule-governed processing, replacing these with data-driven, emergent accounts of linguistic systematicities.
In this case, of course, generative alludes to Chomsky’s theorizing, which revolutionized linguistics starting in the late 1950s and 1960s. Chomsky took ideas from formal logic and applied them to the grammar of natural language. In this view the syntactic structure of a sentence is seen to be analogous to a proposition in a formal system. The derivation of that structure is analogous to a formal proof.

In Aspects of the Theory of Syntax (1965) Chomsky introduced the distinction between linguistic competence and performance. The distinction has proven to be interesting, influential, vague, and problematic. Roughly speaking, competence is about the idealized grammar as stated in a formal description. Performance is what happens in actual speech, with its imperfections, slips of the tongue, and finite memory. The last point is very important, as Chomsky’s formal model(s) found natural language syntax to be formally equivalent to a universal Turing machine. But that requires infinite memory, and no real computing device, not even the human brain, has infinite memory available.
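
To make the proof analogy concrete, here is a minimal sketch of my own in Python. The toy grammar and lexicon are invented for illustration and are far simpler than anything in Chomsky’s transformational machinery; the point is only that generating a sentence proceeds by rewrite steps, much as a proof proceeds by inference steps.

```python
import random

# A toy context-free grammar: each nonterminal maps to a list of
# possible expansions (sequences of nonterminals or terminal words).
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["cat"], ["dog"]],
    "V":   [["chased"], ["saw"]],
}

def derive(symbol):
    """Rewrite a symbol down to terminal words, printing each rewrite
    step much as one would list the lines of a derivation (or proof)."""
    if symbol not in GRAMMAR:                 # a terminal word: nothing left to rewrite
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    print(f"{symbol} -> {' '.join(expansion)}")
    words = []
    for sym in expansion:
        words.extend(derive(sym))
    return words

print(" ".join(derive("S")))                  # e.g. "the dog chased a cat"
```

A grammar like this is pure “competence”: it says nothing about memory limits, hesitations, or slips of the tongue.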

Continuing on p. 8:
Constructionist analyses chart the ways in which children’s creative linguistic ability, their language system, emerges from their analyses of the utterances in their usage history using general cognitive abilities, and from their abstraction of regularities within them. In this view, language acquisition is a sampling problem, involving the estimation of the population norms from the learner’s limited sample of experience as perceived through the constraints and affordances of their cognitive apparatus, their human embodiment, and the dynamics of social interaction.
Back in the 1960s, ’70s, and into the ’80s, computational approaches to language relied on hand-coded grammars and semantics, and prespecified acoustic analysis. Starting in the middle and late 1970s, statistical techniques were developed for speech recognition, where the computer learned how to map continuous speech into discrete phonemes, syllables, and morphemes. Such techniques were then applied to other aspects of language processing, most notably machine translation.
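
The “sampling problem” framing is easy to picture with a toy statistical learner. The sketch below is my own illustration, not anything from Beckner et al. or from the actual speech systems of that era (those used hidden Markov models over acoustic data); it simply estimates word-bigram probabilities from a tiny usage history, which is the same principle of inferring population norms from a finite sample of experience.

```python
from collections import Counter, defaultdict

# A tiny "usage history": the learner's finite sample of experience.
usage = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat saw the dog",
]

# Count which word follows which across the whole sample.
bigrams = defaultdict(Counter)
for utterance in usage:
    words = utterance.split()
    for w1, w2 in zip(words, words[1:]):
        bigrams[w1][w2] += 1

def p_next(w1, w2):
    """Estimated probability that w2 follows w1, from the sample alone."""
    total = sum(bigrams[w1].values())
    return bigrams[w1][w2] / total if total else 0.0

print(p_next("the", "cat"))   # 2/6, about 0.33, in this sample
print(p_next("sat", "on"))    # 1.0: every observed "sat" was followed by "on"
```

Nothing here is hand-coded grammar; the regularities are abstracted from usage, and a different sample would yield different estimates.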

Idiolects and Beyond

Skipping on to p. 12:
Language exists both in individuals (as idiolect) as well as in the community of users (as communal language). Language is emergent at these two distinctive but inter-dependent levels: an idiolect is emergent from an individual’s language use through social interactions with other individuals in the communal language, while a communal language is emergent as the result of the interaction of the idiolects. Distinction and connection between these two levels is a common feature in complex adaptive systems. Patterns at the collective level (such as bird flocks, fish schools, or economies) cannot be attributed to global coordination among individuals; the global pattern is emergent, resulting from long-term local interactions between individuals.
Now, it’s one thing to talk about flocking as an emergent behavior of individual birds. That’s relatively simple behavior compared to language behavior, which necessarily trails off into everything else through semantic linkage. Just what kind of thing is this “communal language”? Does it make any sense at all to characterize its structure and mechanisms by an idealized formal grammar of linguistic competence? Not much, I’d say, not much.
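
Still, one way to get a feel for a communal pattern that exists only in the interactions is a toy simulation. The following naming-game-style sketch is my own illustration, not the paper’s: each agent has its own preferred word for a single meaning (a micro-idiolect), pairs of agents interact purely locally, and a shared convention emerges without any global coordination.

```python
import random

WORDS = ["glorp", "blick", "tove"]                    # competing variants for one meaning
agents = [random.choice(WORDS) for _ in range(50)]    # each agent's current preference

def interact(agents):
    """One local exchange: a randomly chosen hearer adopts the speaker's word."""
    speaker, hearer = random.sample(range(len(agents)), 2)
    agents[hearer] = agents[speaker]

steps = 0
while len(set(agents)) > 1:                           # stop when a communal convention exists
    interact(agents)
    steps += 1

print(f"converged on '{agents[0]}' after {steps} local interactions")
```

Which word wins is an accident of the interaction history; nothing at the “communal” level chose it, which is the sense in which the global pattern is emergent.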

Continuing on p. 12:
Therefore, we need to identify the level of existence of a particular language phenomenon of interest. For example, language change is a phenomenon observable at the communal level; the mechanisms driving language change, such as production economy and frequency effects that result in phonetic reduction, may not be at work in every individual in the same way or at the same time. Moreover, functional or social mechanisms that lead to innovation in the early stages of language change need not be at work in later stages, as individuals later may acquire the innovation purely due to frequency when the innovation is established as the majority in the communal language.
In short, there is no such thing as THE English language, or THE Mandarin language. We’ve got individual idiolects on the one hand, and various convergences and divergences among them.

Now, on p. 13:
Both communal language and idiolects are in constant change and reorganization. Languages are in constant flux, and language change is ubiquitous. At the individual level, every instance of language use changes an idiolect’s internal organization (Bybee, 2006).
Think about that for a minute: “every instance of language use changes an idiolect’s internal organization.” Really? Is language really that Heraclitean? I don’t know, but I’ve seen reports by neuroscientist Walter Freeman that are very suggestive in this regard.
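
Before getting to Freeman, though, note that Bybee’s claim is at least easy to model in a toy way. The sketch below is my own illustration, loosely in the spirit of exemplar or usage-based models: the “idiolect” here is nothing but running statistics over tokens of a word’s duration, and every single token it registers nudges those statistics, so the system never returns to exactly its previous state.

```python
class Idiolect:
    """A running estimate of a word's typical duration (in ms), updated on every use."""
    def __init__(self, prior_ms=250.0):
        self.mean = prior_ms     # stands in for prior experience with the word
        self.count = 1           # the prior counts as one pseudo-observation

    def hear(self, duration_ms):
        # Incremental mean: each token shifts the stored representation a little.
        self.count += 1
        self.mean += (duration_ms - self.mean) / self.count

me = Idiolect()
for token_ms in [240, 235, 230, 228, 220]:    # a run of phonetically reduced tokens
    me.hear(token_ms)
    print(round(me.mean, 1))                  # the representation drifts steadily downward
```

A toy, obviously; the interesting question is whether real neural representations behave anything like this, which is where Freeman comes in.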

Freeman investigates the olfactory cortex in rats (see, e.g., Walter J. Freeman, How Brains Make Up their Minds, Weidenfeld & Nicolson, 1999). He’s discovered that when a rat learns to identify a new odorant, two things happen. Of course, a new neural “signature” for that odorant is added to the rat’s repertoire of neural signatures. What’s more interesting, the existing signatures for the body of learned odorants all change. The whole system is reconfigured in response to the new learning (p. 83):
A new odorant is learned by adding a new attractor with its basin, but, unlike a fixed computer memory, an attractor landscape is flexible. When a new class is learned, the synaptic modifications in the neuropil jostle the existing basins of the packed landscape, as the connections within the neuropil form a seamless tissue. This is known as attractor crowding. No basin is independent of the others.
It’s difficult to see how those attractors, and their basins, correspond to Dennett’s rigidly fixed memes that pass from one brain to another. Dennett’s memes, which he has compared to computer viruses, are like those hand-coded rules of Good Old-Fashioned AI systems. They are “imposed” on the system “from above,” by a programmer operating outside the system. Dennett’s mental memes are neurally implausible, a fossil left over from an older intellectual era.
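
Freeman’s “attractor crowding” has a crude analogue even in a toy attractor network, and the contrast with a fixed, addressable memory is easy to see in code. The Hopfield-style sketch below is my own illustration (it is nothing like real olfactory cortex): because every stored pattern is superimposed on the same weight matrix, adding a new memory reshapes the landscape that holds all the old ones, and once the landscape is packed past its capacity the old basins visibly deform.

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)

def store(patterns):
    """Hebbian learning: all memories are superimposed on one weight matrix."""
    W = np.zeros((N, N))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / N

def recall(W, probe, steps=20):
    """Let the network settle (synchronous updates, for simplicity)."""
    s = probe.astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

old_memories = [rng.choice([-1, 1], N) for _ in range(8)]
W_before = store(old_memories)

# "Learn a new odorant": pack several more attractors into the same landscape.
new_memories = [rng.choice([-1, 1], N) for _ in range(6)]
W_after = store(old_memories + new_memories)

# Probe with a noisy version of one OLD memory, before and after the new learning.
probe = old_memories[0].copy()
probe[rng.choice(N, 6, replace=False)] *= -1   # flip roughly 10% of the units

overlap_before = int(recall(W_before, probe) @ old_memories[0])
overlap_after = int(recall(W_after, probe) @ old_memories[0])
print("overlap with the old memory, before new learning:", overlap_before, "of", N)
print("overlap with the old memory, after new learning: ", overlap_after, "of", N)
# Typically the overlap drops once the landscape is crowded: the old basin has
# been jostled by the new ones, because no basin is independent of the others.
```

A Dennett-style meme, by contrast, would be like writing a new record into its own fixed slot, leaving every other record untouched.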

I’ll let Beckner et al. have the last word. Again from p. 13:
Language evolves far away from equilibrium as other complex systems do. As we define language primarily through dynamical rules, rather than by forces designed to pull it to a static equilibrium, it shares, along with almost all complex systems, a fundamentally far-from-equilibrium nature. An open system continues to change and adapt as its dynamics are “fed” by energy coming into the system, whereas a closed system will reduce to a stable state or equilibrium (Larsen-Freeman & Cameron, 2008).

* * * * *

* The ‘Five Graces Group’: Clay Beckner, University of New Mexico; Richard Blythe, University of Edinburgh; Joan Bybee, University of New Mexico; Morten H. Christiansen, Cornell University; William Croft, University of New Mexico; Nick C. Ellis, University of Michigan; John Holland, Santa Fe Institute; Jinyun Ke, University of Michigan; Diane Larsen-Freeman, University of Michigan; Tom Schoenemann, James Madison University.
