Tuesday, January 2, 2018

Natural language is irreducibly computational

Not so long ago I made three assertions:
1. The process whereby word forms, whether spoken, written, or gestured (signed), are linked to meaning/semantics is irreducibly computational.

2. A complete text is well-formed if and only if its meaning is resolved once the last word form has been taken up.

3. It is in this context that Roman Jakobson’s poetic function may be considered a principle of literary form.

This post is about the first of those assertions.

Why do I believe it? I didn’t reason it out beforehand. I simply made the assertion, on an intuitive basis, if you will.

I regard that assertion as being not much different from the assertion that water is liquid or that a pine tree is a plant. That’s just the kind of thing it is: computation. Unfortunately, that process is not something we can observe directly, as we can water or pine trees. The solar system is a better source of examples, specifically, the idea that the earth moves around the sun and that the moon is a satellite of the earth. Those relationships are not perceptually obvious. It takes quite a bit of observation, by multiple observers at many locations, and abstract mathematical reasoning about those observations, to establish their plausibility.

So it is, I’m suggesting, with word forms, meaning, and their linkage through computation. We have to reason indirectly and abstractly, even more so than about the moon. As far as I know, computation is our best current proposal about how word forms are linked to meaning – for all I know it may be our only proposal other than something like, you know, magic. In this context magic is sometimes called intention.

Of course, it’s not as though I haven’t been thinking about this for a long time, since graduate school and my immersion in computational linguistics with David Hays. Computational linguistics, by definition, uses a computational process to link word forms to meaning – though in practice it doesn’t always involve meaning. More often we have simple parsing, where a syntactic structure is assigned to a string of word forms. That’s all well and good, but many of the systems created in computational linguistics were not intended to model the human process and, of those that were, how do we know they’re correct, or in what respects they are correct?

We don’t know, not yet.
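Still, it helps to be concrete about what “simple parsing” amounts to. Below is a minimal sketch in Python, written for this post: a two-rule grammar (S → NP V NP, NP → Det N) and a tiny invented lexicon that together assign a syntactic structure to a string of word forms. The grammar, lexicon, and sentence are all my own toy assumptions; no claim is made that any of it resembles the human process.

    # A toy parser: assign a syntactic structure to a string of word
    # forms. Grammar, lexicon, and sentence are all invented here.
    LEXICON = {
        "the": "Det", "a": "Det",
        "dog": "N", "cat": "N",
        "chased": "V", "saw": "V",
    }

    def parse_np(words, i):
        # NP -> Det N. Return (tree, next_index) or None.
        if i + 1 < len(words) and LEXICON.get(words[i]) == "Det" \
                and LEXICON.get(words[i + 1]) == "N":
            return ("NP", ("Det", words[i]), ("N", words[i + 1])), i + 2
        return None

    def parse_s(words):
        # S -> NP V NP. Return a parse tree, or None if ill-formed.
        np1 = parse_np(words, 0)
        if np1 is None:
            return None
        subject, i = np1
        if i >= len(words) or LEXICON.get(words[i]) != "V":
            return None
        verb, i = ("V", words[i]), i + 1
        np2 = parse_np(words, i)
        if np2 is None or np2[1] != len(words):
            return None
        return ("S", subject, verb, np2[0])

    print(parse_s("the dog chased a cat".split()))
    # ('S', ('NP', ('Det', 'the'), ('N', 'dog')), ('V', 'chased'),
    #  ('NP', ('Det', 'a'), ('N', 'cat')))

Notice that meaning never enters into it; the program manipulates forms and categories only.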

There are at least two issues:
1. What is computation, anyhow?

2. Just what is the scope of my assertion?
On the second, I mean only that one process, the linking of word forms to semantic structures. There is also, of course, a process whereby word forms are identified in the speech stream, on the page, or in gestures. That process may be computational as well, but I’m not making that assertion. And there is the process whereby at least some semantic structures are linked to perception: apples, oranges, snakes, thunderclaps, mountains, and the like, things we can see, hear, smell, taste, touch, and so forth. Those processes may also be computational, but I’m not including them within the scope of the assertion.
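To mark off that scope with a crude schematic (everything in it is invented for illustration), think of three stages: identifying word forms, linking forms to semantic structures, and grounding some of those structures in perception. Only the middle stage falls under the assertion.

    # Stage 1 (identification) and stage 3 (perceptual grounding) are
    # outside the scope of the assertion; only stage 2 is at issue.

    def identify_word_forms(signal):
        # Stage 1: segment the stream into word forms. Whitespace
        # splitting stands in, trivially, for a hard perceptual problem.
        return signal.lower().split()

    SEMANTICS = {  # Stage 2: word form -> semantic structure (invented)
        "apple": {"kind": "fruit"},
        "red": {"attribute": "color", "value": "red"},
    }

    def link_to_meaning(forms):
        # Stage 2: the linkage the assertion is about.
        return [SEMANTICS[f] for f in forms if f in SEMANTICS]

    # Stage 3, grounding "apple" in things seen and tasted, is not
    # sketched here and is likewise outside the assertion's scope.

    print(link_to_meaning(identify_word_forms("Red apple")))
    # [{'attribute': 'color', 'value': 'red'}, {'kind': 'fruit'}]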

More generally, I note that we can use computation to simulate anything we can specify in sufficient detail (of the right kind). But that doesn’t mean those phenomena are computational in kind. The simulation of an atomic explosion is very different in kind from a real atomic explosion; the same goes for simulating turbulent flow in a pipe, complex dynamics in neural circuits, or traffic queuing at a toll booth. Simulations are one thing; the simulated processes, another.

Any computer model of that linguistic process, binding word forms to meanings, will necessarily be a simulation, for it will be realized in a digital computer. The human brain is not a digital computer. But, if my assertion is correct, it will be a simulation of a naturally occurring computational process – for I do include human language within the scope of natural phenomena.

That leaves us with the first question: What is computation? Alan Turing has provided us with one answer. But I’m not sure how useful that answer is for my purposes. If we can simulate anything on a digital computer, well then it pretty much follows that we can specify a Turing machine that behaves LIKE that phenomenon. That’s not terribly interesting.
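For concreteness, here is Turing’s answer in miniature, a little simulator I’ve written for this post (the machine and its transition table are my own toy example, not anything from Turing): computation as a finite table of state transitions driving a read/write head over a tape.

    # A minimal Turing machine: increment a binary numeral. The
    # transition table IS the machine; everything else is bookkeeping.
    def run_tm(tape, state="inc", blank="_"):
        cells = dict(enumerate(tape))   # position -> symbol
        head = len(tape) - 1            # start at the rightmost bit
        rules = {
            # (state, symbol) -> (write, move, next_state)
            ("inc", "1"): ("0", -1, "inc"),    # carry propagates left
            ("inc", "0"): ("1", 0, "halt"),    # absorb the carry
            ("inc", blank): ("1", 0, "halt"),  # grow the numeral
        }
        while state != "halt":
            write, move, state = rules[(state, cells.get(head, blank))]
            cells[head] = write
            head += move
        return "".join(cells.get(i, blank)
                       for i in range(min(cells), max(cells) + 1))

    print(run_tm("1011"))  # -> "1100", i.e. 11 + 1 = 12

The point of the example is how little it takes: give me any well-specified process and, in principle, such a table can be written for something that behaves like it. That is why Turing’s answer, though correct, doesn’t settle my question.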

And in a way it’s a bit circular. As I recall, Turing arrived at his basic conception as an abstraction over what people do when making arithmetic calculations. And arithmetic calculation is, after all, a very specialized use of language. If we think of language as a natural phenomenon, then arithmetic harnesses it – to borrow a word from Mark Changizi – for a culturally specified purpose, numerical calculation. So arguing that natural language is computational in Turing’s sense runs in a circle: the model was abstracted from a specialized use of language in the first place.

No, I’m fishing for something else.

In “Principles and development of natural intelligence” [1] David Hays and I specified five principles that emerged in the course of evolution and that operate cumulatively. The fifth principle allowed clever apes to become human. We called it indexing:
The indexing principle is about computational geometry, by which we mean the geometry, that is, the architecture (Pylyshyn, 1980) of computation rather than computing geometrical structures. While the other four principles can be construed as being principles of computation, only the indexing principle deals with computing in the sense it has had since the advent of the stored program digital computer. Indexed computation requires (1) an alphabet of symbols and (2) relations over places, where tokens of the alphabet exist at the various places in the system. The alphabet of symbols encodes the contents of the calculation while the relations over places, i.e. addresses, provide the means of manipulating alphabet tokens in carrying out the computation. The token at a place is a value and the place is identifiable by way of the relation given an address (see Fig. 13). Thus, the structure of the computational space can be used to locate various content items – that is, indexing. The possibilities of indexed computing become particularly exciting when one realizes, as von Neumann did, that values and places can be encoded in the same alphabet, making it possible to introduce the manipulation of computational geometry into the content of computation.
Indexing sounds an awful lot like what goes on in binding word forms to meanings, where the word forms are the indexes for the meanings.
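Here is a toy rendering of that resemblance, my own gloss rather than anything from the paper, with invented entries throughout: word forms serve as addresses into a store of semantic structures, and, in the von Neumann spirit the passage describes, a stored value can itself be an address back into the same store.

    # Word forms as indexes: the form is the address, the semantic
    # structure is the value at that place. All entries are invented.
    store = {
        "dog": ("animal", "canine", "domestic"),
        "hound": "dog",   # a value that is itself an address
    }

    def lookup(form):
        value = store[form]
        # If the value is itself a place in the store, follow it:
        # the geometry of the space locates the content.
        while isinstance(value, str) and value in store:
            value = store[value]
        return value

    print(lookup("dog"))    # ('animal', 'canine', 'domestic')
    print(lookup("hound"))  # same structure, reached by indirection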

Computation thus seems fundamental to human nature. It is the inner structure of language and it is what makes us human.

* * * * *

[1] William Benzon and David Hays, Principles and Development of Natural Intelligence, Journal of Social and Biological Structures, Vol. 11, No. 8, July 1988, 293-322. https://www.academia.edu/235116/Principles_and_Development_of_Natural_Intelligence

* * * * *

Whoops! Been there, done that: On the binding of word forms to structures of meaning: A quick note on computing in the mind.

3 comments:

  1. How would iconicity fit into this scheme of indexing? No language is completely symbolic, none are completely iconic. Diagrammatical iconicity utilizes the internal geometry of the phonological system to quantize the world of expressible meanings. In some languages it is very robust, while in others you'd be hard-pressed to find it at all.

    Replies
    1. I'm not sure what the problem is. Anything can serve as an index.

  2. Would it be an issue if you viewed language specificity as a constraint here?

    Anything can serve as an index, but alteration would entail a descriptive change and difference.

    Construction/descriptive questions and issues. Demonstrate how the position holds given the constraint and the differences made.

    I think.


