Erin S. Isbilen, Morten H. Christiansen, Nick Chater, It's about time: Adding processing to neuroemergentism, Journal of Neurolinguistics, Volume 49, February 2019, Pages 224-227, https://doi.org/10.1016/j.jneuroling.2018.04.005.
Linguistic exchanges occur in real time, on a moment-to-moment basis. The rapid rate of linguistic input (10-15 phonemes per second; Studdert-Kennedy, 1987), and its transience (50-100 ms; Elliott, 1962; Remez et al., 2010) pose a fundamental challenge to processing, with information being delivered at a rate that strains the limit of the human auditory threshold (∼10 non-speech sounds per second; Miller & Taylor, 1948). The additive effects of the linguistic signal's fast rate and fleeting nature are further exacerbated by the limitations of human working memory, which on average can retain no more than 4 (Cowan, 2001) to 7 ± 2 items at a time (Miller, 1956). Together, these challenges form a Now-or-Never Bottleneck (Christiansen & Chater, 2016a,b): if input is not processed as soon as it is encountered, the signal is either overwritten or interfered with by new incoming material. In order to sustain linguistic functions, the cognitive system must overcome this bottleneck. Importantly, the Now-or-Never Bottleneck is not limited to linguistic processing. Rather, it extends to the perception of haptic (Gallace, Tan, & Spence, 2006), visual (Haber, 1983), and non-linguistic auditory input (Pavani & Turatto, 2008). Understanding how the cognitive system deals with this bottleneck can therefore provide fundamental insights into the emergence not only of language, but also of the other complex cognitive abilities discussed by HCRCSWY. And I'm guessing that it's the pressure of real-time processing that gives language its computational 'nature'.
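The overwriting dynamic behind the Now-or-Never Bottleneck can be made concrete with a toy simulation. The sketch below (my own illustration, not from the paper) models memory as a fixed-capacity buffer of roughly four items, in the spirit of Cowan's estimate: each new unit of input displaces the oldest unprocessed one, so whatever is not recoded in time is simply lost.

```python
from collections import deque

def stream_through_buffer(items, capacity=4):
    """Toy model of the Now-or-Never Bottleneck: input streams into a
    fixed-capacity buffer, and each new item overwrites the oldest
    unprocessed one. Returns what survives and what was lost."""
    buffer = deque(maxlen=capacity)  # deque with maxlen drops the oldest item
    lost = []
    for item in items:
        if len(buffer) == capacity:
            lost.append(buffer[0])   # this item is about to be overwritten
        buffer.append(item)
    return list(buffer), lost

# Stream the ten "phonemes" of a word through a four-item buffer:
retained, lost = stream_through_buffer(list("perception"), capacity=4)
# retained == ['t', 'i', 'o', 'n']; the first six segments are lost
```

Without chunking, only the last few segments survive; the point of the framework is that recoding input into larger chunks lets far more information fit through the same narrow buffer.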
The dynamics of how the linguistic signal unfolds in real time underscore the importance of memory processes in considering how the cognitive system deals with the Now-or-Never Bottleneck. Building on the basic memory process of chunking, Christiansen and Chater (2016b) suggest that the cognitive system engages in Chunk-and-Pass Processing to overcome the bottleneck. Using Chunk-and-Pass Processing, the cognitive system builds a multi-level representation of incoming input by rapidly compressing and recoding the input into chunks at increasing levels of abstraction as soon as it is encountered. This process of compression and abstraction enables information to be held in memory for longer periods of time. To provide an example from language, the raw acoustic input may be chunked into syllables, syllables into words or multi-word phrases, and so on up to complex representations of the discourse. Throughout the multi-level process of chunking, top-down information driven by predictions from semantic, pragmatic and discourse expectations, augmented by real-world knowledge, will enrich the resulting representations. The reverse is hypothesized to happen during language production, with the intended message being broken down into chunks of increasing specificity. [...]
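One level of Chunk-and-Pass recoding can be sketched as a simple greedy pass over the input: lower-level units are matched against an inventory of known chunks and replaced by a single higher-level unit, compressing the sequence before it is passed up. This is my own minimal sketch, with made-up phoneme, syllable, and word inventories; it is not the authors' implementation.

```python
def chunk_and_pass(units, chunk_inventory):
    """One Chunk-and-Pass step (toy sketch): greedily recode a sequence
    of lower-level units into known higher-level chunks. Units that
    match no chunk pass through unchanged."""
    out, i = [], 0
    while i < len(units):
        for size in range(len(units) - i, 1, -1):  # prefer the longest chunk
            candidate = tuple(units[i:i + size])
            if candidate in chunk_inventory:
                out.append(chunk_inventory[candidate])
                i += size
                break
        else:
            out.append(units[i])  # no chunk matched; pass the unit up as-is
            i += 1
    return out

# Hypothetical inventories: phonemes -> syllables, then syllables -> a word
syllables = {("b", "a"): "ba", ("n", "a"): "na"}
words = {("ba", "na", "na"): "banana"}

phonemes = ["b", "a", "n", "a", "n", "a"]
level1 = chunk_and_pass(phonemes, syllables)   # ['ba', 'na', 'na']
level2 = chunk_and_pass(level1, words)         # ['banana']
```

Six phonemes compress to three syllables and then to one word, so each level holds fewer, more abstract units in memory, which is the core of how chunking beats the bottleneck.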
From the viewpoint of the Chunk-and-Pass framework, language acquisition involves learning how to process input – that is, learning how to effectively chunk linguistic input using top-down information in the face of the Now-or-Never Bottleneck. Importantly, the real-time pressures from language processing not only shape language acquisition, but also the cultural evolution of language itself. [...]
Similarly, the incorporation of multiple cues in natural language can enhance both the usefulness and the learnability of linguistic structures. Because the Now-or-Never Bottleneck makes back-tracking very hard, the language system needs to rely on all available information to be right-the-first-time when chunking the input.
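The right-the-first-time idea can be illustrated with a small cue-combination sketch (my own toy example, with invented cue scores): several independent cues each score the candidate chunkings, and the system commits immediately to the best-scoring one rather than keeping alternatives around for back-tracking.

```python
def commit_to_chunking(candidates, cues):
    """Toy multiple-cue integration: score each candidate chunking by
    multiplying the (made-up) support each cue gives it, then commit
    to the single best one right away -- no back-tracking."""
    def score(candidate):
        p = 1.0
        for cue in cues:
            p *= cue.get(candidate, 0.0)
        return p
    return max(candidates, key=score)

# Hypothetical cues for segmenting the stream "thedog": distributional
# statistics and phonotactics each score the candidate segmentations.
candidates = [("the", "dog"), ("th", "edog")]
distributional = {("the", "dog"): 0.8, ("th", "edog"): 0.1}
phonotactic = {("the", "dog"): 0.9, ("th", "edog"): 0.2}
best = commit_to_chunking(candidates, [distributional, phonotactic])
# best == ("the", "dog")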
Though we must be careful here. I AM NOT asserting that computation is a basic or even the basic neural process. Rather, I am working within the scope of my conjecture that language processing is the most primitive form of computational process in the mind/brain. In particular, the mapping between linguistic form and meaningful content is where computation is necessary. What's computed, then, is the relation between chunks of form and chunks of meaning.
See my post, The Computational Envelope of Language, and the posts it cites as leading up to it.