
Tuesday, December 10, 2013

Three Notes on Literature, Form, and Computation

This post is a concatenation of three older posts, all dealing with (the idea of) the actual process of computation. These notes are intended for people who find the idea of literary form as computational form somewhere between odd and morally repulsive. If you lean toward the repulsive end of that continuum there's probably nothing I can say that will change your mind, but these comments are relatively brief. Read them; what do you have to lose? If you're more inclined to find the conjunction of literature and computing merely odd – perhaps even edging over into intriguing (rising tone) – then these notes should be useful.

Computing = Math, NOT 

Everyone knows that computers are about math. And that may be one source of humanistic resistance to computational research techniques, especially in the use of corpus techniques for examining large bodies of texts of historical or literary interest. So: Computers are math, math is not language, literary texts ARE language; therefore the use of computers in analyzing literary texts is taboo, as it sullies the linguistic purity of those texts. QED

Except that digital computers aren’t about math, at least not essentially so. To equate computers with math is to identify computing with just one of its uses, the calculation of numerical values. That equation also identifies mathematics with but one of its aspects, numerical calculation.

* * * * *

The contrast between math and language is, of course, deeply embedded in the American educational system. In particular, it is built into the various standardized tests one takes on the way into college and then, from there, into graduate school. One takes tests that are designed to test verbal abilities, one thing, and mathematical abilities, a different thing. And, while some people score more or less the same on both, others do very much better on one of them. The upshot is that it is easy and natural for us to think in terms of math-like subjects and verbal-like subjects and people good at either but not necessarily both.

The problem is that what takes place “under the hood” in corpus linguistics is not just math (statistics) and natural language (the texts). It’s also and mostly computation, and computation is not math, though, as I said up top, the association between the two is a strong one.

When Alan Turing formalized the idea of computing in the idea of an abstract machine, that abstract machine processed symbols—in a very general sense of both "symbol" and "process". That is, Turing formalized computation as a very constrained linguistic process.

Sets of symbols and processes on them can be devised to do a great many things. Ordinary arithmetic is one of them. To learn arithmetic we must first memorize tables of atomic equivalences for addition, subtraction, multiplication and division. Thus:
1 + 1 = 2
1 + 2 = 3
1 + 3 = 4
. . .
9 + 7 = 16
9 + 8 = 17
9 + 9 = 18
And so on through subtraction, multiplication, and division. To these we add a few simple little recipes (aka algorithms) for performing calculations by applying these atomic equivalences to given arrangements – the arrangements themselves are part of the process – of numbers.
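To make the point concrete, here is a minimal sketch in Python – my own illustration, not part of the original argument – in which the addition table is literally a lookup table of atomic facts and multi-digit addition is one of those little recipes built on top of it:

# The atomic equivalences, treated as memorized facts.
# (Built by comprehension here for brevity; imagine each entry memorized.)
ADD = {(a, b): a + b for a in range(10) for b in range(10)}

def column_add(x, y):
    """Add two numeral strings the way schoolchildren do:
    right to left, one column at a time, carrying as needed."""
    width = max(len(x), len(y))
    digits, carry = [], 0
    for dx, dy in zip(reversed(x.zfill(width)), reversed(y.zfill(width))):
        total = ADD[(int(dx), int(dy))] + carry
        digits.append(str(total % 10))
        carry = total // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(column_add("47", "85"))  # prints 132

Nothing in that recipe “understands” numbers; it shuttles symbols through the table according to a fixed arrangement, which is the point.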

What we do when we do arithmetic, then, is we manipulate those symbols in very constrained ways. Those symbols are mathematical by virtue of the conventions that link them to the world, as counts of objects or units of measure of this or that sort (length, temperature, weight, etc.).

And just what is mathematics? Is Euclidean geometry math? Of course it is. Is it numerical? Fundamentally, no. But then Descartes came along and created conventions by which geometric operations can be achieved through arithmetic means. And…well, I’m not a mathematician, nor a philosopher of math, nor an expert in the theory of computing. But at this moment the question of the relationship between computing and mathematics is looking rather subtle and complex, interesting if you will, and not something that can be summed up by the common association between computers and mathematics.

Cognitivism and the Critic 2: Symbol Processing

It has long been obvious to me that the so-called cognitive revolution is what happened when computation – both the idea and the digital technology – hit the human sciences. But I’ve seen little reflection of that in the literary cognitivism of the last decade and a half. And that, I fear, is a mistake.

Thus, when I set out to write a long programmatic essay, Literary Morphology: Nine Propositions in a Naturalist Theory of Form, I argued that we should think of literary texts as computational forms. I submitted the essay and found that both reviewers were puzzled about what I meant by computation. While publication was not conditioned on providing such satisfaction, I did make some efforts to satisfy them, though I’d be surprised if they were completely satisfied by those efforts.

That was a few years ago.

Ever since then I’ve pondered the issue: how do I talk about computation to a literary audience? You see, some of my graduate training was in computational linguistics, so I find it natural to think about language processing as entailing computation. As literature is constituted by language, it too must involve computation. But without some background in computational linguistics or artificial intelligence, I’m not sure the notion is much more than a buzzword that’s been trendy for the last few decades – and that’s an awfully long time for being trendy.

I’ve already written one post specifically on this issue: Cognitivism for the Critic, in Four & a Parable, where I provide abstracts of four texts which, taken together, give a good feel for the computational side of cognitive science. Here’s another crack at it, from a different angle: symbol processing.

Operations on Symbols

I take it that ordinary arithmetic is most people’s ‘default’ case for what computation is. Not only have we all learned it, it’s fundamental to our knowledge, like reading and writing. Whatever we know, think, or intuit about computation is built on our practical knowledge of arithmetic.

As far as I can tell, we think of arithmetic as being about numbers. Numbers are different from words. And they’re different from literary texts. And not merely different. Some of us – many of whom study literature professionally – have learned that numbers and literature are deeply and utterly different to the point of being fundamentally in opposition to one another. From that point of view the notion that literary texts can be understood computationally is little short of blasphemy.

Not so. Not quite.

The question of just what numbers are – metaphysically, ontologically – is well beyond the scope of this post. But what they are in arithmetic, that’s simple; they’re symbols. Words too are symbols; and literary texts are constituted of words. In this sense, perhaps superficial, but nonetheless real, the reading of literary texts and making arithmetic calculations are the same thing, operations on symbols.

Arithmetic as Symbol Processing

I take it that learning arithmetic calculation has two aspects: 1) learning the relationship between primitive symbols, such as numerals, and the world, and 2) learning rules for manipulating those symbols. Whatever is natural to the human nervous system, arithmetic is not. Children get a good grip on their native tongue with little or no explicit teaching; it just comes ‘naturally.’ But it takes children hundreds if not thousands of hours to become fluent in arithmetic. It is not natural in the sense that language, natural language, is.

In what is known as the Arabic notation, we have ten primitive symbols for numerical values: 1, 2, 3, 4, 5, 6, 7, 8, 9, 0. That last is a real puzzler and has been regarded as evil at various times and places: How can something, even a mere mark, represent nothing? Children learn the meaning of these symbols by learning to count collections of objects, real objects (e.g. blocks, pebbles, buttons, whatever), but also objects represented by pictures on a page.

And we have four primitive symbols for operations (+ - × ÷) of which the first two, for addition and subtraction, are the most basic. Children learn their meaning both through manipulating collections of objects (or their visual representations) and through rules of inference.

Those rules of inference rest on sets of atomic facts. To be sure, that’s not what we call them, but that’s what they are. We call them the addition and subtraction and multiplication and division tables. Each entry in these tables contains a single atomic fact of the form: string1 = string2. String1 always consists of an operator (+ - × ÷) between two strings of numerals, where each string has one or two numerals. String2 likewise consists of a string of one or two numerals. To solve an arithmetic problem we must use these atomic facts to make simple inferences. For example:
1 + 1 = 2
5 – 3 = 2
3 × 4 = 12
8 ÷ 2 = 4
And then there are procedures for more complex cases. My point is simply that arithmetic calculation is symbol processing. Always. Through and through.
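Here is how flat-footed that symbol processing is – a toy Python sketch of my own, in which the atomic facts are stored as literal strings and “calculation” is nothing but looking up which string stands to the left of the equals sign:

# Atomic facts of the form string1 = string2, stored as pure strings.
FACTS = {
    "1 + 1": "2",
    "5 - 3": "2",
    "3 x 4": "12",
    "8 / 2": "4",
}

def infer(string1):
    """Rewrite string1 as string2 by matching an atomic fact.
    No numeric values are ever consulted -- only symbol shapes."""
    return FACTS[string1]

print(infer("3 x 4"))  # prints 12

Nothing in that lookup knows that “12” counts anything; the inference runs on the shapes of the symbols alone.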

Computation: Selection and Combination

The Structuralists talked about the axis of combination and the axis of selection. They were talking about language and, by implication, things that could be analogized to language. But their inspiration is mathematical, as we can see with a bit of elementary algebra. Here’s a simple equation:
x + y = z
The horizontal IS the axis of combination. Here we’re combining three variables, x, y and z, and two operators, + and =. The value of z depends on the value of x and y; it is thus a dependent variable. The values of x and y are not dependent on the values of anything else in this form, and so they are independent variables. The purpose of the form is to tell us just how the value of the dependent variable is related to the values of the independent variables.

The axis of selection is, in effect, the source of values for those variables. Let us say that it is the positive integers plus zero: 0, 1, 2, 3, 4 . . . Now we can select values for x and y along that axis and come up with values for z by using the various rules and procedures of elementary arithmetic. So:
7 + 4 = z: z must be 11
13 + 9 = z: z must be 22
4 + 8 = z: z must be 12
And so forth.
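In code the two axes come apart cleanly – again a sketch of my own devising: the form x + y = z inside the loop is the axis of combination, while the range we draw x and y from is the axis of selection:

from itertools import product

selection_axis = range(5)  # 0, 1, 2, 3, 4: the pool of selectable values

# Each pass through the loop selects values for the independent
# variables; the form x + y = z then fixes the dependent variable.
for x, y in product(selection_axis, repeat=2):
    z = x + y
    print(f"{x} + {y} = z: z must be {z}")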

Now, consider these expressions from linguistics, where we can use → instead of =, but to a similar effect:
S → NP + VP
NP → N
NP → det + N
VP → V + NP
Those are rules of combination and they are defined in terms of variables: S = sentence, N = noun, NP = noun phrase, V = verb, VP = verb phrase, and det = determiner. Given just those rules, we can generate these forms, among others, for proper sentences:
1) N + V + N
2) N + V + det + N
3) det + N + V + N
To have actual sentences we need to put words into those variables. For example, we can select from these words, among many others:
Nouns: John, boy, Mary, girl, candy, ball
Verbs: like, hit
Determiners: a, the
By choosing from the appropriate selection sets we get these sentences, which I’ve indexed to our forms, 1, 2, and 3:
1a) John likes candy.
1b) Mary likes John.
2a) Mary hit the ball.
2b) John hit a girl.
3a) The boy likes candy.
3b) A girl hit John.
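Those rules are enough to drive a tiny sentence generator. Here is a minimal sketch in Python – the rewrite rules and the lexicon are the ones above, though I’ve used the inflected form ‘likes’ to match the example sentences, since this toy grammar knows nothing of agreement:

import random

RULES = {                      # the axis of combination
    "S":  [["NP", "VP"]],
    "NP": [["N"], ["det", "N"]],
    "VP": [["V", "NP"]],
}

LEXICON = {                    # the axis of selection
    "N":   ["John", "boy", "Mary", "girl", "candy", "ball"],
    "V":   ["likes", "hit"],
    "det": ["a", "the"],
}

def generate(symbol="S"):
    """Expand a symbol by its rules, then fill the leaves from the lexicon."""
    if symbol in LEXICON:
        return [random.choice(LEXICON[symbol])]
    expansion = random.choice(RULES[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))  # e.g. 'the boy likes candy'

Run it a few times and it will happily produce strings like ‘a candy hit the John’ – overgeneration that real grammars must constrain, but the symbol processing is the point here.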
For the past half-century linguists have studied syntax from this point of view, broadly speaking — though some will no doubt tell you that I’m now speaking too broadly. There are, in fact, various schools of thinking about syntax and related topics, and they are not mutually consistent. Differences between these schools are deep and, when contested at all, are fiercely contested. But mostly the different schools ignore one another.

And What of Meaning?

Ah, yes, what of it?

Here I would make a distinction between semantics and meaning. Meaning, it seems to me, is fundamentally subjective; that is, it arises only in the interaction of a subject and, in this case, a text (whether written or spoken). Of course, people can communicate with one another and thereby share meanings; and so meaning can be intersubjective as well.

Semantics, on the other hand, is not subjective. To be sure, I’m tempted to say that semantics has to do with the meaning of words, but that would sink me, would it not? If I’ve already said that meaning is subjective, then why would I attempt to assert that semantics IS NOT subjective?

Because it isn’t. Semantics, properly done, is as dumb as rocks. I am thinking of semantics as a domain of study, a topic within linguistics, psychology, philosophy, and computer science. In those contexts it is not subjective. Those investigations may not be fully satisfactory, indeed, they are not; but they are not subjective. Each line of thought, in its own way, objectifies semantics, that is, roughly speaking, the relationship between words and the things and situations to which they (can) refer.

And various computer models of language are among the richest attempts at objectified semantics we’ve got. It is one thing to observe that existing objectifications are inadequate. But one should not infer from that that better, indeed much better, objectifications are impossible in principle. That may be so, but that principle has not, to my knowledge, been demonstrated.

So, semantics is not understood nearly so well as syntax and – as I’ve already indicated – we have major disagreements about syntax. But I don’t think we need to understand semantics deeply and fully in order to assent to the weaker statement that, however it works, it involves symbol processing. That does not, as far as I’m concerned, imply that semantic processing is nothing but symbol processing.

Not at all. It is clear to me that the meaning of symbols is ultimately grounded in non-symbolic schemas, an idea that’s become associated with the notion of embodied cognition. David Hays and his research group (of which I was a member) pursued that line at SUNY Buffalo in the mid-1970s.* And computational investigations of non-symbolic processing have been ongoing for years, with computer vision being the most richly developed.

And that makes my larger point, for if computation can encompass non-symbolic processing as well as symbolic processing, what else is there? In saying this I do not mean to imply that it’s all smooth sailing from here on out – just hop on the computational bus, weigh anchor, fire the jets, and bombs away hot diggity dawg!! Not at all. There’s still much to learn. In fact over the last half century it’s as though the more we’ve learned, the more we’ve come face to face with our ignorance. That’s how it goes when you’re exploring a new world.

And the only way you can explore this particular new world is to think in terms of computation. Just how you do that thinking, that depends on your taste, inclination, imagination, and the problems you’re investigating. But, as far as I can tell, computation’s the only game in town.

* * * * *

* See, for example:
William Benzon, “Cognitive Science and Literary Semantics,” MLN, Vol. 91, 1976, pp. 952–982.
David Hays, Cognitive Structures, HRAF Press, 1981.
William Benzon, Cognitive Science and Literary Theory, Dissertation, Department of English, SUNY Buffalo, 1978.

Computing is a Physical Process

Norm Holland’s been hosting a conversation about literature at his Psychology Today blog. He started out discussing whether or not literature is biologically adaptive, and the discussion wandered into matters of meaning, as such conversations often do. In a comment, one TerryS brought up remarks Holland had made about Uri Hasson’s work using imaging studies of the brains of people watching films. Hasson found correlations between editing patterns in films and the structure of brain activity in viewers; those patterns of brain activity were similar across viewers. Here’s Holland’s response:
You quote my reporting of Hasson's work with movies, however, seems to the point. In the chapter on Form in LITERATURE AND THE BRAIN, you'll find me agreeing that authors can constrain us by form. The clearest example is omission. What an author doesn't say, I can't respond to. I think Hasson's work with film editing shows that the edited form of film functions in the same way as say the ordering of chapters. The ordering and timing of film scenes, the framing of an image--these things do indeed constrain us.

But they constrain us at a purely physiological level, like the bicycle you bring in. That's not what people intend by literature "delivering" meanings.

My problem is in Holland’s next-to-the-last sentence, where he talks of a “purely physiological level, like the bicycle you bring in.” It’s not at all clear to me what that bicycle has to do with it, nor is it clear to me that, once we start talking about the brain, we’ve got anything but physiology.

It’s not clear to me just what we’re talking about when we talk about the meaning of a literary text, or a film, but if that meaning is closely related to what happens in the brain while we’re reading the text, or watching the film, then it is inextricably intermingled with those mere physiological constraints. The brain is a physical system; whatever happens in it is a physical process. To be sure, the process is a very subtle one, and probably chaotic in the technical sense that changes on the smallest scales (both spatially and temporally, such as molecules in a few synapses) can rapidly propagate to large scales (the whole brain over, say, several hours or maybe even days); but it is nonetheless a physical process.

This is a lesson I learned from the late David Hays, who was a computational linguist. He learned that lesson while pursuing the difficult task of getting a computer to process natural language in a useful way. Much of the art in doing this, as in doing anything whatever with computers, is in managing the physical resources available to you, mostly time and memory. A faster machine will perform more operations per second, but, at any given moment, you’ve got to work with the machine you’ve got. The more memory you’ve got, the easier it is to keep partial results around for later use, but, as with speed, you’ve got to work with the machine you’ve got. You may have heard talk of ‘top-down’ and ‘bottom-up’ processing; well, those are strategies for managing resources, time and memory, in a way that exploits the structure of a given problem.
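The classic illustration of that tradeoff – mine, not Hays’s, but it makes his point – is memoization: spend memory on keeping partial results around, and the time cost of a computation can collapse:

from functools import lru_cache

def fib_slow(n):
    """Keeps nothing around: tiny memory, exponential time."""
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)     # spend memory on partial results...
def fib_fast(n):
    """...and the same definition runs in linear time."""
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

print(fib_fast(60))  # immediate; fib_slow(60) would grind for hours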

What, you might ask, has computational linguistics to do with reading literature or seeing movies? Well, of course, texts consist of language, and most movies have speaking roles, so there’s that. Beyond that I am assuming that, in some unspecified way, what happens in the brain is a computational process. This assumption is not an unusual one; it’s been common for over half a century now. It’s been common because many thinkers, some of the highest caliber, have seen in computation a way of implementing mind in matter.

That notwithstanding, the assumption, of course, is by no means universally accepted; you are free to reject it. But let me point out two things: 1) The notion of computation is very abstract and general. The fact that the brain is physically quite different from digital computers doesn’t necessarily imply that it isn’t computational in its operational nature. It just means that it operates on different engineering principles. 2) If you reject any notion of computation as a solution to the problem of implementing mind in matter, you have to propose some other scheme. What is it?

It is for these reasons, then, that I reject Holland’s assertion that there is a ‘merely physiological’ level of operation (and explanation) that is irrelevant to considerations of ‘meaning.’ At the same time I will freely admit that we are a long way from delivering deep and useful models of literary or cinematic computation. But that’s a different issue, not irrelevant by any means, but not directly germane to my current argument, which is about basic principles.

By way of moving forward, let me offer a little computational ‘parable,’ one that I told myself years and years ago. Consider the following arithmetic expression:
(1) 9 - 4 * 3
What’s the value of that expression? Given the conventional understanding of those symbols, the expression is ambiguous. Depending on your preference, either it has no value, or it has two different values. A simple way to disambiguate the expression is to add some formal structure by using parentheses:
(2) (9 - 4) * 3
(3) 9 - (4 * 3)
Now we have two unambiguous expressions. Expression 2 directs us to perform the subtraction first; it evaluates to 15. Expression 3 directs us to perform the multiplication first, yielding a value of -3. If the value of an arithmetic expression can be said to be its meaning, then formal structure, the order in which one performs operations, has a determining effect on the expression’s meaning.
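The parable is easy to render in code. In this sketch (my own) each expression is a small tree; the shape of the tree is the formal structure, and evaluation simply follows that shape:

# Each expression is a tree of the form (operator, left, right).
expr_2 = ("*", ("-", 9, 4), 3)   # (9 - 4) * 3
expr_3 = ("-", 9, ("*", 4, 3))   # 9 - (4 * 3)

OPS = {"-": lambda a, b: a - b, "*": lambda a, b: a * b}

def evaluate(node):
    """The tree's shape fixes the order of operations, hence the value."""
    if isinstance(node, int):
        return node
    op, left, right = node
    return OPS[op](evaluate(left), evaluate(right))

print(evaluate(expr_2))  # 15
print(evaluate(expr_3))  # -3

Same symbols, different structure, different value – which is all the parable claims.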

It’s a long way from that humble little parable to a computational account of Coleridge’s “Kubla Khan,” a problem I’ve been doggedly pursuing off and on for over three decades, or a computational account of Emily Brontë’s Wuthering Heights, on which I’ve done a bit of work, examining its two-generation form in comparison to the two-generation form of Shakespeare’s The Winter’s Tale. And other texts, and some films as well. Some of that work is more obviously tied to computationally inspired formal considerations than others and, on the whole, I’m pleased with the body of work, scattered though it is.

But I can’t say that I’ve come up with any result that’s as clear and decisive as, even if considerably more complex than, the above little arithmetic parable. And if you’re wondering why I’ve pursued the implications of that parable in the absence of definitive results, well, I have two things to say. First: Yes, no deep and smashing results. But I’ve made many interesting and satisfying observations along the way, managed to formulate some interesting questions, and have somehow managed to cobble together a coherent intellectual program centered on the study of form (which you may find in this downloadable paper).

Second: I don’t know what else to do. Much of the profession seems to be inching toward a conclusion I’d reached years ago:
The old methods are no longer yielding interesting results; we need new methods.
Where do we go for something new? Well, there’s cognitive science, there’s evolutionary psychology, and there’s neuroscience. And if you poke into any one of those deeply enough you’re going to run into computation. Cognitive science is what happened when computation, in one form or another, encountered linguistics, psychology, and philosophy. Evolutionary psychology is deeply indebted to game theory, and computational models abound in the neurosciences.

Yes, one can learn from these newer psychologies without confronting computation. But some of their deepest lessons are computational in form and inspiration. Sooner or later students of literature will become comfortable with computational thinking. And when that happens the distinction between physiology and meaning will lose its force.

2 comments:

  1. "9 - 4 * 3
    ...
    If the value of an arithmetic expression can be said to be its meaning, then formal structure, the order in which one performs operations, has a determining effect on the expression’s meaning.
    "

    well you assume that that is the meaning thus driving your exposition into a very reductionistic approach

    I reduce everything to a number; that's not reversible. From the meaning I can't go back to the "thing", -3 can be the result of infinite things

    To me the meaning of the expression is that i have three numbers and multiply the two and add them together and so on (in short i perform some operations on some numbers)

    The "structure" that you are implying is just the interpretation of the viewable. The structure is that i have 9 then - then 4 then * and then 3 in a sequence.

    Thus we are losing something.....

  2. I reduce everything to a number; that's not reversible. From the meaning I can't go back to the "thing", -3 can be the result of infinite things

    That's pretty much true of ordinary literary interpretation as well. Whatever reading is proposed could, in fact, be motivated by any number of texts other than the one actually used to generate the reading.
