
Tuesday, November 27, 2018

Receptive multilingualism, how fascinating!

In the latest issue of The Atlantic, Michael Erard describes a fascinating linguistic phenomenon: "The Small Island Where 500 People Speak Nine Different Languages: Its inhabitants can understand each other thanks to a peculiar linguistic phenomenon".

The article begins:
On South Goulburn Island, a small, forested isle off Australia’s northern coast, a settlement called Warruwi Community consists of some 500 people who speak among themselves around nine different languages. This is one of the last places in Australia—and probably the world—where so many indigenous languages exist together. There’s the Mawng language, but also one called Bininj Kunwok and another called Yolngu-Matha, and Burarra, Ndjébbana and Na-kara, Kunbarlang, Iwaidja, Torres Strait Creole, and English.
None of these languages, except English, is spoken by more than a few thousand people. Several, such as Ndjébbana and Mawng, are spoken by groups numbering in the hundreds. For all these individuals to understand one another, one might expect South Goulburn to be an island of polyglots, or a place where residents have hashed out a pidgin to share, like a sort of linguistic stone soup. Rather, they just talk to one another in their own language or languages, which they can do because everyone else understands some or all of the languages but doesn’t speak them.

The name for this phenomenon is “receptive multilingualism”, something I'd never heard of before reading Erard's article. When I first learned of it, it seemed improbable: how could a community use as many as nine different languages? On reflection, though, it began to seem natural that members of a group living in a zone where many languages are in daily contact would develop passive fluency in several of them.
Victor Mair, commenting on Erard's article, presents examples of a similar phenomenon from his own polyglot experience, mostly involving Chinese topolects plus English (e.g. Singapore). He concludes: "All of this leads me to conclude that, in general, passive recognition of languages is easier than active production, and that this holds true both with speech and writing."

One of the things that I learned as an undergraduate is that our passive or receptive vocabulary is larger than our active or productive vocabulary. That would seem to be in the same behavioral ballpark. What I'd like to know is why. It seems to me that there has to be a simple and direct explanation, but I can't for the life of me come up with one. That explanation obviously needs to be couched in a good account of our linguistic mechanisms. And THAT's something we don't have. OTOH, this phenomenon seems to me to be a valuable clue about the workings of that mechanism.

What do the computational linguists know about the difference between parsing a sentence and generating one? Of course one can, in principle, parse a sentence without knowing what the words mean, and I believe that we've got parsers that do a pretty good job of this. To produce a sentence one starts with items of meaning (whatever/however they are) and assembles them into a coherent string.
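To make that contrast concrete, here is a minimal sketch (my own toy illustration, not anything from Erard or Mair, and not how production-grade parsers actually work): a CKY-style recognizer that accepts a sentence using only each word's syntactic category, with meanings never consulted, next to a brute-force "generator" that has to search over word strings to find the ones the grammar licenses. The grammar and lexicon are invented for the example.

```python
# Toy sketch only: grammar, lexicon, and sentences are invented.
from itertools import product

# A tiny grammar in Chomsky normal form: (B, C) -> A means A -> B C.
RULES = {
    ("NP", "VP"): "S",
    ("Det", "N"): "NP",
    ("V", "NP"): "VP",
}

# The lexicon maps word forms to categories only; no meanings anywhere.
LEXICON = {
    "the": "Det", "a": "Det",
    "dog": "N", "ball": "N",
    "chased": "V", "found": "V",
}

def recognize(words):
    """CKY recognition: the word order is given, we just check it."""
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1].add(LEXICON[w])
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for pair in product(chart[i][k], chart[k][j]):
                    if pair in RULES:
                        chart[i][j].add(RULES[pair])
    return "S" in chart[0][n]

def generate(max_len=5):
    """Production as brute search: we must *create* an order, trying
    many candidate strings to find the few the grammar allows."""
    hits = []
    for length in range(1, max_len + 1):
        for seq in product(LEXICON, repeat=length):
            if recognize(list(seq)):
                hits.append(" ".join(seq))
    return hits

print(recognize("the dog chased a ball".split()))  # True: order already supplied
print(len(generate()))  # 32 valid strings found after examining thousands
```

The point of the sketch is only the asymmetry: checking a string someone hands you is cheap, while producing one means choosing among a combinatorial space of possible orderings.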

Let's step back a second. On the receptive side, let's posit that the world is pretty much the same regardless of the language you speak (yeah, I know, Sapir-Whorf, but how strong is that, really?). And, however we order our inner semantic engine, there is coherence and order there. When listening to someone speak a familiar but foreign language, we don't have to supply the syntax; it's already there in the string. All we have to do is map the individual lexemes to our inner semantic store, perhaps with the help of a syntactic clue or three, and we understand. And if we're in conversation we can do a bit of back and forth to clarify things, even if our conversational partners are in the same situation as we are (i.e. they can understand us, but not speak our language). 
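Here is an equally toy sketch of that receptive side (again my own illustration; the languages and word lists are invented stand-ins, not data about Warruwi or any real speech community): a hearer whose receptive lexicon maps forms from several languages onto the same concepts can recover the gist of an utterance in the order the speaker supplies, while production fails wherever a form can't be retrieved in the hearer's own language.

```python
# Toy sketch only: the word lists below are invented stand-ins.

# A hearer's receptive lexicon: surface forms from several languages
# all map onto the hearer's own stock of concepts.
RECEPTIVE = {
    "dog": "DOG",  "perro": "DOG",  "hund": "DOG",
    "runs": "RUN", "corre": "RUN",  "laeuft": "RUN",
}

# The hearer's productive lexicon covers only their own language.
PRODUCTIVE = {"DOG": "dog", "RUN": "runs"}

def understand(utterance):
    """Comprehension: the speaker supplies the order; we only map each
    recognizable form onto a concept (unknown forms become '?')."""
    return [RECEPTIVE.get(w, "?") for w in utterance.split()]

def produce(concepts):
    """Production: we must retrieve a form for every concept and impose
    an order ourselves; a single missing form blocks the sentence."""
    missing = [c for c in concepts if c not in PRODUCTIVE]
    if missing:
        return f"cannot produce: no forms for {missing}"
    return " ".join(PRODUCTIVE[c] for c in concepts)

print(understand("el perro corre"))   # ['?', 'DOG', 'RUN'] -- gist recovered
print(produce(["DOG", "RUN"]))        # 'dog runs'
```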

So, it's one thing to understand language strings, where syntactic order is there in the string. It's quite something else to produce them, where we have to create that order. Both cases presuppose that we've got usable mappings between lexical forms and semantic elements.

That's connected speech. But what about individual vocabulary items in our own language? Why does the range of words we can understand in context exceed the number we can actually use? Well, duh, because we actually HAVE a semantic context supplied to us when listening to speech or reading written language. Again, it's the context.
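One crude way to picture that gap (a deliberately toy model of my own, with invented numbers, not a claim about the actual mechanism): suppose each word form is linked to its meaning with some strength, and recognition gets extra support from the surrounding context while production has to retrieve the form from meaning alone. Then weakly linked words clear the recognition bar but not the production bar.

```python
# Toy model only: the strengths and thresholds are invented for illustration.

LINK_STRENGTH = {       # hypothetical form-meaning link strengths, 0..1
    "sad": 0.9,
    "lachrymose": 0.3,  # understood when context helps, rarely produced
}

RECOGNITION_THRESHOLD = 0.5   # link strength + context support must clear this
PRODUCTION_THRESHOLD = 0.7    # retrieval from meaning alone gets no such boost

def can_recognize(word, context_support=0.3):
    """Recognition: the heard form plus surrounding context jointly
    activate the meaning, so a weak link can still suffice."""
    return LINK_STRENGTH.get(word, 0.0) + context_support >= RECOGNITION_THRESHOLD

def can_produce(word):
    """Production: the form must be retrieved from the meaning with no
    contextual boost, so only strong links make it over the bar."""
    return LINK_STRENGTH.get(word, 0.0) >= PRODUCTION_THRESHOLD

for w in LINK_STRENGTH:
    print(w, "recognize:", can_recognize(w), "produce:", can_produce(w))
# sad: recognized and produced; lachrymose: recognized in context only
```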

Now, what's the right set of technical details needed to pull this all together?

* * * * *

From Erard's article:
While Australia isn’t the only place in the world where receptive multilingualism happens, one thing that makes it different in Warruwi is that those receptive skills have a status as real proficiencies. Where the academic foreign-language field tends to see such skills as language half-learned, as an incomplete—or even worse, failed—acquisition, at Warruwi a person can claim receptive skills in a language as part of their repertoire. The Anglos in Texas aren’t likely to put “understands Spanish” on a job résumé, while the immigrant children might be embarrassed that they can’t speak their parents’ languages. Another difference is that people at Warruwi Community don’t see receptive skills as a path to spoken abilities. Singer’s friend Richard Dhangalangal, for example, has lived most of his life with speakers of Mawng, which he understands very well, but no one expects him to start speaking it.

Receptive multilingualism has been institutionalized in some places. In Switzerland, a country with four official languages (from two different language families), receptive multilingualism has been built into the educational system, such that children learn a local language, a second national language, and English from an early age. In principle, this should allow everyone to understand everyone else. But a 2009 study showed considerable monolingualism among Swiss citizens; Italian speakers tended to be the most multilingual and French speakers the least. Moreover, each group of speakers possessed strong negative attitudes about the others. Just as in Warruwi Community, social factors and ideas about language shape the life of many languages in places like Switzerland.

But even someone in Switzerland might consider the status of understanding-without-speaking in Australia to be no big deal, given that many people in Europe tend to have related languages in their repertoire (think Romance, Germanic, and Slavic languages). This allows them to draw on cognate vocabulary and grammatical structures for passive understanding. The languages from Warruwi Community, by contrast, come from six different language families and aren’t mutually intelligible, so those clusters of receptive skills amount to something quite sophisticated. Not enough is known about receptive multilingualism to know how many languages someone could understand but not speak.
