Tuesday, September 6, 2016

Why should literary critics learn “hard-core” computational linguistics or cognitive science?

I have, in a number of posts, explained that learning computational semantics was a tremendous intellectual experience for me. That was during graduate school in the 1970s. I ended up basing my dissertation (Cognitive Science and Literary Theory, 1978) on it, where I used Shakespeare’s Sonnet 129 as my primary example. And that was the last text where I attempted an end-to-end analysis using that kind of model.

How then can I argue that literary critics really ought to learn that kind of modeling? If it’s not directly useful in the analysis of texts, what’s it good for? My answer to that question has depended on that word “directly.” Such models are intricate, and constructing one for, say, a Shakespeare play rather than a sonnet would be impossibly complex. Can’t do it.

What about INDIRECTLY? That’s been my thought. Such models give you useful intuitions about language, and those intuitions can help in analyzing texts. In particular, not only do they give you a strong sense that language has a definite internal structure, but they give you a sense of how it works – even allowing for differences in details between different models. It’s not that literary critics don’t believe that language is highly structured; rather, they don’t have anything on which to hang that belief. And so they make do with a vague notion that language is a riot of signs pointing to and ricocheting off of one another.

That’s one thing, and I believe it. In a sense, I employ that (kind of) argument in my current draft, Form, Event, and Text in an Age of Computation, where I use it to motivate a three-way distinction between a text (taken as a physical object), the mind’s model of the world, and the path that text traces through the model. It’s a simple distinction, but one difficult to make, to VISUALIZE, without an explicit computational model.
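To make that three-way distinction concrete, here’s a deliberately tiny sketch in Python. Everything in it is invented for illustration — the concepts, the relations, the matching rule — and it doesn’t approximate any real computational semantics. The point is only to show the three objects as distinct things: the text (a string of words), the model of the world (here, a toy graph of concepts), and the path the text traces through that model.

```python
# Toy illustration of the three-way distinction: text, world model,
# and the path the text traces through the model. All names and data
# are invented; a real semantic model would be vastly richer.

# The world model: concepts linked by labeled relations.
world_model = {
    "sun": {"is-a": "star", "property": "bright"},
    "star": {"is-a": "celestial-body"},
    "bright": {"opposite": "dim"},
}

def trace_path(text, model):
    """Return the sequence of concepts the text visits in the model."""
    path = []
    for word in text.lower().split():
        if word in model:       # crude matching, purely illustrative
            path.append(word)
    return path

text = "The sun is bright"            # the text, a physical object
path = trace_path(text, world_model)  # the path through the model
print(path)                           # ['sun', 'bright']
```

The text, the model, and the path are three different data objects, which is exactly what makes the distinction easy to visualize once you’ve built even a crude version of the machinery.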

Yesterday another line of thinking occurred to me. Rather than directing my argument at textual analysis, direct it at theory. There the use of explicit computational models can be direct. They are a good way of building theories.

Of course, there’s nothing new about that idea. It’s foundational to the cognitive sciences. But literary cognitivists don’t go there, and I’ve complained about it. Here’s the thing: even if you don’t attempt a ‘full computational’ model, whatever that is (and I certainly didn’t attempt one in my dissertation work), learning how to construct such a model gives you a sense of, well, construction.

That’s missing in the literary cognitivists I’ve read. They just take a collection of cognitive models, dump them in a bag, and call it a model of the literary mind. This may be interesting in the small, in the accounts of the bits and pieces, but it’s not very useful. It’s mostly an excuse to read some interesting work in the cognitive sciences. And a little of that goes a long way.

And that, incidentally, is why I like Scott McCloud’s Understanding Comics (1993) so much. It gives you a sense of construction. The same with Braitenberg’s Vehicles: Experiments in Synthetic Psychology (1984). I discuss those books, and two others, in an old post: Cognitivism for the Critic, in Four & a Parable (the parable is about Simon’s ant, which, incidentally, can be taken as a parable about the importance of context).

More later.
