
Tuesday, September 25, 2018

Sean Carroll interviews David Poeppel about language and the brain

Sean Carroll is a physicist at Caltech and Poeppel studies the brain, language, and cognition at NYU and the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany.

0:11:02 David Poeppel: [...] Now, the influence of Chomsky was to argue, in my view, successfully that you really wanted to have a mentalist stance about psychology. And he had a lot of very interesting arguments. He also made a number of very important contributions to computational theory, to computational linguistics, and obviously to the philosophy of mind. [...]

0:11:58 DP: But he is... He’s difficult. I actually just finished a chapter, a couple of... Last year’s so-called “The Influence of Chomsky on the Neuroscience of Language,” because many of us are deeply influenced by that. The fact of the matter is, his role has been both deeply important and moving and terrible and it’s partly because he’s so relentlessly un-didactic. If you’ve ever picked up any of his writings, it’s all about the work. He’s not there to make it bite-size and fun. He assumes a lot, a lot of technical knowledge and a lot of hard work. And so if you’re not into that, you’re never gonna get past page one because it is technical. But that’s made it very difficult because it seems obscurantist to many people.

0:12:40 Sean Carroll: It’s always very interesting, there are so many fields where certain people manage to have huge outsized influences despite being really hard to understand. Is it partly the cachet of the reward you feel when you finally do understand something difficult?

0:12:57 DP: I wish that were true. That would mean a lot of people would read, let’s say, my boring papers. But I think in the case of Chomsky, it’s true because there are superficial misinterpretations and misreadings that are very catchy. The most famous concept is language is innate. Now, such a claim was never made and never said. It’s much more nuanced. It’s highly technical. It’s about what’s the structure of the learning apparatus, what’s the nature of the evidence that the learner gets. Obviously this is a very sophisticated and nuanced notion, but what comes out is, “Oh, that guy’s claim is language is innate,” and that’s...
On understanding technical work:
0:14:56 DP: But, yeah, sometimes, especially in, let’s say, technical disciplines, you have to do the hard work and you can’t cut any corners. You have to actually get into the technical notions and what are the presuppositions, what are the, let’s say, hypothesized primitives of a system, how do they work together to generate the phenomena we’re interested in, and so on. The concept of generative grammar, it’s a notion of grammar that’s trying to go away from simply a description or list of factoids about languages, and trying to say, “Well, how is it that you have a finite set of things in your head, a finite vocabulary, and ostensibly a finite number of possible rules, maybe just one rule, who knows, but you can generate and understand an infinite number of possible things?” It’s the concept that’s typically called “discrete infinity.”

0:15:43 SC: Yep, okay.

0:15:44 DP: That’s a cool idea, and the idea was to work it out. Well, if that’s the parts list, how can you have a system and how can you learn that system, acquire that system? How can that system grow in you, and you become a competent user of it? That’s actually subtle, and you can’t just...
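
To make the “discrete infinity” point above concrete, here is a minimal, purely illustrative sketch (a toy grammar invented for this post, not any analysis Poeppel or Chomsky actually proposes): a three-word vocabulary and a single recursive rule already generate an unbounded set of sentences.

```python
# Toy grammar: S -> NP "sleeps"  |  NP "knows that" S
# Finite vocabulary, one recursive rule; the sentence count grows without bound.

NPS = ["the linguist", "the physicist", "the student"]

def sentences(depth):
    """Yield every sentence with at most `depth` levels of embedding."""
    for np in NPS:
        yield f"{np} sleeps"                     # the non-recursive option
        if depth > 0:
            for s in sentences(depth - 1):
                yield f"{np} knows that {s}"     # the recursive option

for d in range(4):
    print(d, sum(1 for _ in sentences(d)))
# prints 3, 12, 39, 120: there is no longest sentence, hence "discrete infinity"
```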
The speech signal:
0:25:08 DP: Our conversation, if you measured it, the mean rate of speech, across languages by the way, it’s independent of languages, it’s between 4 and 5 Hertz. So, the amplitude modulation of the signal... The signal is a wave. You have to imagine that any signal, but the speech signal, in particular, is a wave that just goes up and down in amplitude or, informally speaking, in loudness. The signal goes up and down, and the speech signal goes up and down four to five times a second.

0:25:35 SC: What does the speech signal mean? Is this what your brain waves...

0:25:38 DP: The speech signal is the stuff that comes out of your mouth. I’m saying, “Your computer is gray.” Let’s say that was two and a half seconds of speech. It came out of my mouth as a wave form, and that’s the wave form that vibrates your eardrum, which is cool. But if you look at the amplitude of that wave form, it’s signal amplitude going up and down. It’s actually now... This is not debated, it’s four to five times per second. This is a fact about the world, which is pretty interesting. So, the speech rate is... Or the so-called modulation spectrum of speech is 4 to 5 Hertz. [...]
Music, incidentally, has a modulation spectrum that’s a little bit slower. It’s about 2 Hertz. That’s equivalent to roughly 120 beats per minute, which is pretty cool. [...] If you take dozens and dozens of hours of music, and you calculate what is the mean across different genres, what is the mean rate that the signal goes up and down, it’s 2 Hertz.
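
For readers who want to see what a “modulation spectrum” measurement looks like in practice, here is a rough sketch of estimating the dominant amplitude-modulation rate of a recording. It is only my illustration, assuming NumPy/SciPy and a mono waveform; the published analyses use filter banks and averaging over many recordings.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def modulation_peak(signal, fs):
    """Crude estimate of the dominant amplitude-modulation rate (Hz) of a waveform.

    signal: 1-D mono audio samples; fs: sample rate in Hz.
    Idea: extract the amplitude envelope, keep only its slow fluctuations,
    and ask which rate between 0.5 and 20 Hz carries the most energy.
    """
    envelope = np.abs(hilbert(signal))            # amplitude envelope of the wave
    b, a = butter(4, 20, btype="low", fs=fs)      # keep only slow (<20 Hz) modulations
    env = filtfilt(b, a, envelope)
    env = env - env.mean()
    spectrum = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(env.size, d=1 / fs)
    band = (freqs > 0.5) & (freqs < 20)
    return freqs[band][np.argmax(spectrum[band])]

# Speech recordings tend to peak around 4-5 Hz; music around 2 Hz,
# which is the same figure as 120 beats per minute (2 cycles/s * 60 s/min).
```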
We've got better descriptions of what the brain is doing, explanations...:
0:31:39 DP: And partly, that has to do with what do we think is an explanation. And that’s a very complicated concept in its own right, whether you’re thinking about it from the point of view of philosophy, the sciences, an epistemological idea. But we do have, let’s say... What we have for sure is better descriptions, if not better explanations. And the descriptions have changed quite a bit. I don’t know what the time scale is that would count as success, but I think it’s... We can sort of say that we’ve had the same paradigm for... We’ve had the same neurobiological paradigm for about 150 years since Broca and Wernicke, since the 1860s actually. A very straightforward idea...
The dual stream model of speech:
0:42:52 DP: [...] So, we reasoned that maybe the brain capitalizes on the same computational principles. You have one stream of information that says, “Look, what I really need to do is figure out what am I actually hearing, what are the words, how do I put them together, and how do I extract meaning from that.” And you have another stream that really needs to be able to deal with, “Well, how do I translate that into an output stream?” Let’s call it a “how” stream or an articulatory interface.

0:43:33 DP: Why would you do such a thing? Well, let’s take the simplest case of a word. And so what’s a word? Word is not a technical concept, by the way. Word is an informal concept, as you remember from your reading of Steve Pinker’s book. The technical term here would be morpheme, the smallest meaning-bearing unit. But we’ll call them words, words roughly correspond to ideas. So, what is a word? You have a word that comes in. My word that comes into your head now is, let’s say, “computer.” And as it comes in, you have to link that sound file to the concept in your head. It comes in, you translate it into a code we don’t know, let’s say Microsoft brain, and that code then gets linked somehow to the file that is the storage of the word “computer” in your head. Now, in your head somewhere there’s a file, an address, that says the word “computer,” what it means for you, like, “I’m on deadline,” “Oh, this file,” or, “Goddamn, my email crashed.” But there’s many other things. So, you know what it means, you know how to pronounce it, you know a lot about computers, but you also know how to say it. So, it also has to have an articulatory code.

0:44:43 DP: Now, here comes the rub. The articulatory code is in a different coordinate system than all the other ones, because it’s in the motor system. It’s in basically time and motor... People call it joint space, because you move articulators, or you move your jaw, your tongue, your lips. So, the coordinate system that you use as a controller is quite different than the other ones. You have to have areas of the brain that go back and forth seamlessly and very quickly, because speech is fast, between an articulatory coordinate system for speaking and an auditory coordinate system for hearing. And some coordinate systems yet unspecified, which we don’t understand at all, for meaning. You’re screwed. So, even something banal, like knowing the word “computer,” or “glass,” or “milk,” is already a deeply complicated theoretical problem.
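
Poeppel’s “file in your head” picture can be caricatured as a record holding several codes in different formats. The sketch below is only that caricature, meant to make the coordinate-system point vivid; every field name and value is invented for illustration and is not a model anyone has proposed.

```python
from dataclasses import dataclass

@dataclass
class LexicalEntry:
    """Caricature of the 'file in your head' for a single word.

    The point is only that the articulatory code lives in a different coordinate
    system (articulator positions over time) than the auditory or semantic codes.
    """
    spelling: str
    auditory_code: list      # e.g. a phoneme-like sequence, in auditory terms
    semantic_code: set       # placeholder for whatever meaning turns out to be
    articulatory_code: list  # per-gesture targets for jaw / tongue / lips over time

computer = LexicalEntry(
    spelling="computer",
    auditory_code=["k", "ə", "m", "p", "j", "u", "t", "ɚ"],
    semantic_code={"machine", "deadline", "email crash"},
    articulatory_code=[
        {"jaw": 0.2, "tongue": 0.7, "lips": 0.1},   # toy numbers, not real targets
        {"jaw": 0.5, "tongue": 0.3, "lips": 0.9},
    ],
)

# Speaking and hearing have to translate between these codes quickly and in both
# directions, which is why separate streams for recognition and articulation are proposed.
```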
Predictive computation:
0:55:25 DP: I think the notion that it’s entirely constructed, that you need a computational theory using the word “computational” but loosely, I think, is now completely convincing. That the way we do things are predictive, for instance, that most of what we do is a prediction, that the data that you get vastly under-determine the percepts and experience that you extract from the initial data, and so that it’s a filling in process. So, you take underspecified data, that are probably noisy anyway, and then you build an internal representation that you use for inference and you use for action. Those are the two things you presumably wanna do most. You wanna not run into things and you wanna think about stuff.
And so it goes. 

Bonus: Sean Carroll on the problem of having an original idea:
0:52:18 SC: The problem is that people who sound like complete nut jobs... If you do have a tremendously important breakthrough that changes the world, you will be told you’re a complete nut job. But most people who sound like complete nut jobs do not have tremendously important breakthroughs that will change the world.
