Friday, December 13, 2013

On Deciding that Cognitivism is of Limited Value in Literary Criticism

Ripeness is all.
–Shakespeare

Research in artificial intelligence (AI) has always been torn between two poles: On the one hand there is the desire to understand how the human mind works. On the other hand there is the desire to do something of practical value. If we in fact knew how the human mind worked, these might not be so very different: Just code up a model of the human mind and give it some practical job.

Alas, we don’t know how the mind works.

We know, for example, that our ability to read the newspaper depends on tens of thousands of pieces of background knowledge we’ve got stored away, knowledge we’ve picked up throughout our lives. We also know something about coding such knowledge into software – there was a great deal of research on this a couple of decades ago – but it is very difficult, takes a lot of time, and, while it is interesting as a research project, it hasn’t proved terribly effective in practical applications.
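
Just to give a concrete flavor of what “coding such knowledge into software” means – this is a toy illustration in Python, not any of the actual research systems – every piece of background knowledge has to be entered by hand, one fact at a time, before a program can do even trivial inference over it:

    # A toy sketch (hypothetical) of hand-coded background knowledge.
    # Every commonsense fact must be typed in explicitly, which is why
    # scaling this approach to tens of thousands of facts is so laborious.

    # Facts stored as (subject, relation, object) triples.
    FACTS = {
        ("newspaper", "is_a", "publication"),
        ("publication", "contains", "articles"),
        ("article", "reports", "event"),
        ("election", "is_a", "event"),
    }

    def related(subject, relation):
        """Everything the knowledge base links to `subject` via `relation`."""
        return {obj for s, r, obj in FACTS if s == subject and r == relation}

    def is_a(subject, category):
        """Follow 'is_a' links transitively – one tiny piece of inference."""
        parents = related(subject, "is_a")
        return category in parents or any(is_a(p, category) for p in parents)

    print(is_a("election", "event"))            # True – one fact someone typed in
    print(related("publication", "contains"))   # {'articles'}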

Starting about three decades ago, however, AI researchers began developing statistical techniques that turned out to be very effective in producing practical results – provided, of course, that they have access to boatloads of data and lots of computing horsepower. Such software sorts the mail and handles routine phone messaging. It is also the kind of software that powers IBM’s Watson, the system that beat human Jeopardy champions.
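
By way of contrast, here is an equally toy sketch of the statistical approach – again purely illustrative, using the scikit-learn library and made-up messages, nowhere near the scale of the real systems – in which the program learns its categories from labeled examples rather than from hand-coded facts:

    # A toy sketch of the statistical approach: fit a model to labeled examples.
    # Real systems (mail sorting, phone routing, Watson) use vastly more data
    # and far richer models; this only illustrates the contrast with hand-coding.
    # Requires scikit-learn; the messages and categories are invented.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    messages = [
        "please reset my password",
        "I cannot log in to my account",
        "when will my package arrive",
        "my order has not shipped yet",
    ]
    labels = ["support", "support", "shipping", "shipping"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(messages, labels)

    print(model.predict(["where is my package"]))  # likely ['shipping']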

While winning at Jeopardy makes for good TV, that particular task has little practical value. But the same software can be, and is being, trained to assist physicians in making diagnoses. That’s very practical indeed. But no one believes that Watson “thinks” like human beings do.

The Watson project was headed by David Ferrucci, who’s been interviewed a zillion times since Watson made its TV debut, and who recently made these remarks about his decision to sideline the search for the machinery of mind and, instead, to use computational techniques known to produce results:
“I have mixed feelings about this,” Ferrucci told me when I put the question to him last year. “There’s a limited number of things you can do as an individual, and I think when you dedicate your life to something, you’ve got to ask yourself the question: To what end? And I think at some point I asked myself that question, and what it came out to was, I’m fascinated by how the human mind works, it would be fantastic to understand cognition, I love to read books on it, I love to get a grip on it”—he called Hofstadter’s work inspiring—“but where am I going to go with it? Really what I want to do is build computer systems that do something. And I don’t think the short path to that is theories of cognition.”
What interests me about that remark is that I made a similar decision, but I did it so that I could continue to think about how the human mind works.

That is, Ferrucci abandoned a certain approach to the human mind so he could do something of practical value. I abandoned that approach to the mind so that I could continue studying the mind by studying literature.

As I have explained in various pieces (here, for example: Two Disciplines in Search of Love), early in my career I immersed myself in the kinds of computational models that were intended to model the human mind. I went so far as to model the semantics of a Shakespeare sonnet; I even published two papers on that work (Cognitive Networks and Literary Semantics and Lust in Action: An Abstraction). But I never did another sonnet, much less a play or a novel. Why? Because the work was just too hard relative to the specifically literary insight I got out of it. What I took away from that work is that, yes, there's something there, there really is. But, alas, it’s a looooongggg way off.

I’ve continued with that cognitive work off and on over the years because I find it intrinsically interesting. But I don’t expect it to pay off in fundamental insight into the workings of the literary mind. If it does so, well, that’s fine, but that’s not why I do the work.

I’ve reoriented my literary work around the practical analysis and description of literary texts and, of course, rationalizing and explaining that work. My most recent work on “Kubla Khan” (2003) and “This Lime-Tree Bower” (2004) is explicitly analytical and descriptive in character. While both of those essays contain highly theoretical (and speculative) material, that material is not about the mechanisms of language. My more recent work on ring form and center point construction is largely analytical and descriptive.

Would I like to know how the mind produces and understands such texts? Sure. But I don’t see how to get there from here. Whatever it is that’s going on in those texts, it’s too far from anything we understand in explicit (cognitive and computational) detail. Right now, the most important contribution I can make to our understanding of such phenomena is producing good descriptions of the texts themselves and, of course, encouraging others to do the same.

* * * * *

BTW, for a detailed formal argument about the limitations of cognitive science for literary study, see Frank Kelleter, “A Tale of Two Natures: Worried Reflections on the Study of Literature and Culture in an Age of Neuroscience and Neo-Darwinism,” Journal of Literary Theory, Vol. 1, No. 1, 2007, pp. 153-189.
