What does that first question even mean? In what sense is language computationally privileged? There may well be an abstract answer to that question, but I don’t have the skills to articulate it. So I’ll have to do so indirectly.
In the decade or so following World War II electronic digital computation developed around a few key problems: 1) data processing, e.g. tabulating census statistics, 2) dynamics, e.g. simulating atomic explosions, artillery calculations, and 3) machine translation. Not artificial vision, or hearing, or motor kinematics, or any other human sensory or motor activity. The problem was to translate texts from one natural language to another.
THAT’s computational privilege, but I mean privilege in perhaps a peculiar sense. Machine translation, of course, was important for reasons of national defense. It wasn’t just any language we wanted to translate from; it was Russian. But that’s not what I mean by privilege. That’s why the work was funded, but by privilege I mean something like tractable. Language was deemed computationally tractable.
Language is computationally tractable in a way that those other activities – seeing, hearing, moving the body – are not. Why? Because digital computing is itself a linguistic activity, albeit the languages involved are highly restricted and limited in a way that natural languages aren’t. Still, natural language is more like computer languages than seeing, hearing, and jumping rope are. That’s what I mean by computationally tractable.
At this point it gets a little tricky. What is arithmetic? Ordinarily we think of it as a kind of math, which is very different from language. Ordinarily, that’s so. But it’s also superficial.
How do we do arithmetic calculations? One can use a simple mechanical device, like an abacus. But we’re taught to do it with numerical symbols, for the values zero through nine, plus a decimal point, plus four operators (plus, minus, times, divided by) and the equals sign. That, plus a bunch of simple rules, makes arithmetic a kind of simple language. There is thus a deep connection between language and arithmetic, and hence between language and mathematics.
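To make that concrete, here is a minimal sketch of arithmetic treated as a language: a small vocabulary (digits, a decimal point, four operators) plus rules for combining them. The function names (`tokenize`, `evaluate`) and the two-pass precedence scheme are my own illustrative choices, not anything from the historical systems discussed above.

```python
# Arithmetic as a tiny formal language: a vocabulary of symbols plus
# combination rules. This is an illustrative sketch, not a full parser.

def tokenize(text):
    """Split a string into the language's symbols: numbers and operators."""
    tokens, i = [], 0
    while i < len(text):
        ch = text[i]
        if ch.isspace():
            i += 1
        elif ch in "+-*/":
            tokens.append(ch)
            i += 1
        else:
            # Read a run of digits and at most one decimal point as a number.
            j = i
            while j < len(text) and (text[j].isdigit() or text[j] == "."):
                j += 1
            tokens.append(float(text[i:j]))
            i = j
    return tokens

def evaluate(tokens):
    """Apply the rules: * and / bind more tightly than + and -."""
    # First pass: resolve * and / immediately, defer + and -.
    stack = [tokens[0]]
    i = 1
    while i < len(tokens):
        op, num = tokens[i], tokens[i + 1]
        if op == "*":
            stack[-1] *= num
        elif op == "/":
            stack[-1] /= num
        else:
            stack.append(op)
            stack.append(num)
        i += 2
    # Second pass: resolve the deferred + and - left to right.
    result = stack[0]
    for i in range(1, len(stack), 2):
        if stack[i] == "+":
            result += stack[i + 1]
        else:
            result -= stack[i + 1]
    return result

print(evaluate(tokenize("2 + 3 * 4")))  # 14.0
```

The point of the sketch is that nothing here "knows" anything about quantity; it is pure symbol manipulation under explicit rules, which is exactly what makes arithmetic computationally tractable in the sense above.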
That’s computational privilege.
Now, our second question: Why is literary form objectively knowable, but meaning not? Caveat: This is going to be quick and impressionistic, more of a conceptual placeholder than anything else.
Literary form belongs to the computable aspect of language. Word meaning, and hence the meaning of texts of any kind, including literary texts, ultimately depends on the world and our access to the world through the senses and the motor system. That access is, in principle, open-ended and undefined. It is not computable.
Word meaning, I submit, is pretty much like those elusive qualia that philosophers talk about. Perhaps we can think of it as the qualia of the mind. We can build and test models of visual perception, for example, and so investigate the relationship between objective characteristics of the visual scene and color perception. That is, we can build models of how qualia arise, but those models are models, not qualia themselves. If the model is implemented in a computational system attached to visual sensors, then it may actually create pseudo-qualia, but real qualia require a living system. We don’t know how to create those.
The same is true for language, for meaning vs. semantics. We can build a semantic model, which is about how words and texts have meaning. But it IS a model, not meaning. (For more on the distinction between meaning and semantics, see my recent post, 2 Comments on Moretti’s LitLab 15: Patterns and Interpretation [#DH].)
Meaning and qualia are both ontologically and epistemologically subjective in Searle’s sense. Semantic models and sensory models are both ontologically and epistemologically objective in Searle’s sense [1]. (Also see my recent post, Objectivity and Intersubjective Agreement.)
* * * * *
[1] John R. Searle, The Construction of Social Reality, Penguin Books, 1995, pp. 5 ff.