
Monday, November 23, 2015

Commensurability, Meaning, and Digital Criticism

What do I mean by “commensurate”? Well…psychoanalytic theory is not commensurate with language. Neither is semiotics. Nor is deconstruction. But digital criticism is, sorta. Cognitive criticism as currently practiced is not commensurate with language either.

Let me explain.

In the early 1950s the United States Department of Defense decided to sponsor research in machine translation; they wanted to use computers to translate technical documents from Russian into English. The initial idea, or hope, was that this would be a fairly straightforward process. You take a sentence in the source language, Russian, identify the appropriate English word for each word in the source text, and then add proper English syntax and voilà! your Russian sentence is translated into English.

Alas, it's not so simple. But researchers kept plugging away at it until the mid-1960s when, tired of waiting for practical results, the government pulled the plug on funding. That was the end of that.

Almost. The field renamed itself computational linguistics and continued research, making slow but steady progress. By the middle of the 1970s government funding began picking up and the DoD sponsored an ambitious project in speech understanding. The goal of the project was for the computer to understand “over 90% of a set of naturally spoken sentences composed from a 1000‐word lexicon” [1]. As I recall – I read technical reports from the project as they were issued – the system was hooked to a database of information about warships. So that 1000-word lexicon was about warships. Those spoken sentences were in the form of questions and the system demonstrated its understanding by producing a reasonable answer to the question.

The knowledge embodied in those systems – four research groups worked on the project for five years – is commensurate with language in the perhaps peculiar sense that I’ve got in mind. In order for those systems to answer questions about naval ships they had to be able to parse speech sounds into phonemes and morphemes, identify the syntactic relations between those morphemes, map the result into lexical semantics, and from there hook into the database. And then the process had to run in reverse to produce an answer. To be sure, a 1000-word vocabulary in a strictly limited domain is a severe restriction. But without that restriction, the systems couldn’t function at all.
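
To give a concrete feel for that staged architecture, here’s a deliberately tiny sketch in Python. Everything in it – the lexicon, the ship database, the single question pattern – is invented for illustration; the real systems started from acoustic input and were vastly more elaborate:

```python
# A toy question-answering pipeline (not actual ARPA SUR code): words are
# tagged against a lexicon, matched to a syntactic pattern, mapped to a
# semantic form, and used to query a database; the answer then runs back
# out as English. All names and data here are invented.

# Hypothetical miniature lexicon and warship database.
LEXICON = {"what": "WH", "is": "BE", "the": "DET",
           "speed": "ATTR", "length": "ATTR", "of": "PREP",
           "kennedy": "SHIP", "nimitz": "SHIP"}

SHIPS = {"kennedy": {"speed": "34 knots", "length": "320 meters"},
         "nimitz":  {"speed": "31 knots", "length": "333 meters"}}

def parse(words):
    """Match tokens against one fixed question pattern:
    WH BE DET ATTR PREP SHIP -> {attr, ship}."""
    tags = [LEXICON.get(w) for w in words]
    if tags == ["WH", "BE", "DET", "ATTR", "PREP", "SHIP"]:
        return {"attr": words[3], "ship": words[5]}
    return None  # outside the system's tiny competence

def answer(question):
    """Run the pipeline forward (parse, query the database) and then in
    reverse (generate an English answer)."""
    sem = parse(question.lower().rstrip("?").split())
    if sem is None:
        return "I don't understand."
    value = SHIPS[sem["ship"]][sem["attr"]]
    return f"The {sem['attr']} of the {sem['ship'].title()} is {value}."

print(answer("What is the speed of Kennedy?"))
# -> The speed of the Kennedy is 34 knots.
```

The point of the toy is the shape of the process: forward from word strings through syntax to semantics to the database, and then back out to English. Break any link in that chain and the system produces nothing at all.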

These days, of course, we have systems with much more impressive performance. IBM’s Watson is one example; Apple’s Siri is another. But let’s set those aside for the moment, for they’re based on a somewhat different technology than that used in those old systems from the Jurassic era of computational linguistics (aka natural language processing).

Those old systems were based on explicit theories about how the ear decoded speech sounds, how syntax worked, and how semantics worked as well. Taken together those theories supported a system that could take natural language as input and produce appropriate output without any human intervention between the input and the output. You can’t do that with psychoanalysis, semiotics, deconstruction, or any other theory or methodology employed by literary critics in the interpretation of texts. It’s in that perhaps peculiar sense that the theories with which we approach our work are not commensurate with language, the raw material of the literary texts we study.

About all we can say about the process through which meaning is conveyed between people by a system of physical signs is that it’s complex, we don’t understand it, and it is not always 100% reliable. The people who designed those old systems have a lot more to say about that process, even if all that knowledge has a somewhat limited range of application. They know something about language that we don’t. The fact that what they know isn’t adequate to the problems we face in examining literary texts should not obscure the fact that they really do know something about language and mind that we don’t.

Now, what about Siri and Watson? The type of research employed in those old speech understanding systems went on for about a decade and was then replaced by a somewhat different methodology. This methodology dispensed with those explicit theories and instead employed sophisticated statistical techniques and powerful computers operating on large data sets. These statistically based learning approaches first appeared in speech recognition and optical character recognition (OCR). The goal in these systems is simply to recognize the input in computer-readable terms. There’s no attempt at understanding, translating into another language, or answering questions. That came later.
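
The flavor of those techniques is easy to convey. Here’s a minimal noisy-channel sketch in Python of the kind of decision rule that underlies statistical OCR and speech recognition: pick the word that maximizes P(word) × P(observation | word). The word counts and the error model below are invented for illustration:

```python
# A toy noisy-channel decoder: given a garbled observation, choose the
# word w maximizing log P(w) + log P(observation | w). The counts and
# the per-character error model are made up for this example.

from math import log

# Hypothetical unigram counts from a training corpus (the language model).
COUNTS = {"ship": 50, "shop": 30, "chip": 10, "whip": 5}
TOTAL = sum(COUNTS.values())

def channel_logprob(observed, word):
    """Crude error model: each character independently matches with
    probability 0.9, otherwise 0.1 (assumes equal lengths)."""
    if len(observed) != len(word):
        return float("-inf")
    return sum(log(0.9) if o == c else log(0.1)
               for o, c in zip(observed, word))

def decode(observed):
    """Pick the most probable word under prior times channel model."""
    return max(COUNTS, key=lambda w: log(COUNTS[w] / TOTAL)
                                     + channel_logprob(observed, w))

print(decode("shib"))  # -> 'ship'
```

Nothing in that computation understands “ship”; it just weighs corpus frequencies against a model of likely errors. That is recognition without understanding.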

The big task these days is to combine the two approaches, hand-coded theory-based knowledge and statistical learning, into a single system. But that’s not where I’m going with this post. Where I’m going is that the statistical techniques employed in digital criticism are of a piece with (and often the same as) the statistical techniques employed in OCR, speech recognition, and more ambitious systems such as Siri and Watson. The larger point is simply that digital criticism is, in the sense I’m employing the term, commensurate with language in a way that conventional criticism is not.

Digital criticism starts with the raw signifiers and that’s it. By analyzing large, highly structured collections of raw signifiers (that is, collections of texts) these methods produce descriptions of those collections that give us (that is, human critics) clues about what’s going on in those texts. As far as I can tell, those clues could not be produced in any other way. It’s not as though digital critics are doing things that would be done better with an army of critics that we don’t have. Even if we had that army reading all those texts, how would they express their understanding of what they read? How could they aggregate their results? No, digital criticism is not a poor substitute for hordes of human readers; it’s something else, something new and different.
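
For instance, here’s a minimal distant-reading sketch in Python. The file names are hypothetical placeholders, and real studies use far more sophisticated measures (topic models, for example), but the logic is the same: count the raw signifiers and report what’s distinctive.

```python
# A minimal distant-reading sketch: count raw signifiers (word tokens)
# across a small corpus and report which words are distinctively frequent
# in each text. File names below are hypothetical placeholders.

from collections import Counter
import re

def tokens(path):
    with open(path, encoding="utf-8") as f:
        return re.findall(r"[a-z']+", f.read().lower())

files = ["austen.txt", "dickens.txt", "eliot.txt"]  # hypothetical corpus
corpus = {name: Counter(tokens(name)) for name in files}

background = sum(corpus.values(), Counter())  # corpus-wide counts
bg_total = sum(background.values())

for name, counts in corpus.items():
    total = sum(counts.values())
    # Rank words by how much more frequent they are in this text
    # than in the corpus as a whole (a crude distinctiveness ratio).
    distinctive = sorted(
        counts,
        key=lambda w: (counts[w] / total) / (background[w] / bg_total),
        reverse=True)
    print(name, distinctive[:10])
```

The output is a description of the collection, a clue; deciding what that distinctive vocabulary means is still the critic’s job.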

And those basic methods work only because they can make use of piles of raw signifiers that they neither understand nor have theories about. We’re the ones with the theories and we use them to make sense of what our digital tools reveal, our digital telescopes if you will. To do that we’ve got to learn to think like sociologists, as Andrew Goldstone has remarked [2]:
Basically, I think we should situate quantitative methods in DH (which are currently going under names like “digital methods,” “distant reading,” and “macroanalysis”) in the context of one of the large-scale transformations of literary study since 1970 or so, its steadily growing and now dominant concern with the relation between the cultural, the social, and the political (let’s call it the cultural turn for short, though I don’t mean to identify the transformation of literary scholarship with the roughly contemporary historiographical shift of the same name). This turn is common knowledge, but it’s kind of fun to count it out, as I tried to do in the talk. 
One of the major challenges of the cultural turn has been the dubious relation between the handful of aesthetically exceptional texts literary scholars have focused their energies on and the large-scale social-historical transformations which have come to be the most important interpretive contexts for those texts. Do these texts tell us, as clues or symptoms, everything we can learn about the systematic relations between society and literature, or, for that matter, about the systematic development of literature considered just as a body of texts? Haven’t we had good reason, ever since the canon debates, to doubt the coherence and comprehensiveness of the body of texts professional scholars happen to value? The cultural turn itself, then, might motivate us to search for other methods than those developed for interpreting the select body of texts.
That’s one thing.

There’s another: When, if ever, will digital criticism approach the kinds of theory-based systems that were developed in computational linguistics back in the 1970s? That’s the kind of work I published in MLN in 1976 and on which my dissertation was based two years later [3]. I don’t have an answer to that question. But I do think that the latest pamphlet out of Stanford’s Literary Lab, On Paragraphs. Scale, Themes, and Narrative Form, is a small step in that direction [4].

* * * * *

[1] Dennis H. Klatt. Review of the ARPA Speech Understanding Project. The Journal of the Acoustical Society of America 62, 1345 (1977). http://dx.doi.org/10.1121/1.381666

Here’s a final ARPA report evaluating the effort: Wayne A. Lea and June E. Shoup. Review of the ARPA SUR Project and Survey of Current Technology in Speech Understanding. Final report, 20 July 1977 – 19 July 1978. Report No. ADA066161. Download PDF: http://www.dtic.mil/get-tr-doc/pdf?AD=ADA066161

[2] Andrew Goldstone, Social Science and Profanity at DH 2014, Andrew Goldstone’s blog, accessed 23 November 2015. URL: http://andrewgoldstone.com/blog/2014/07/26/dh-soc/

[3] William Benzon. Cognitive Networks and Literary Semantics. MLN 91, 1976, pp. 952-982. Download URL: https://www.academia.edu/235111/Cognitive_Networks_and_Literary_Semantics

See also, William Benzon and David Hays, Computational Linguistics and the Humanist. Computers and the Humanities 10, 1976, pp. 265-274. Download URL: https://www.academia.edu/1334653/Computational_Linguistics_and_the_Humanist

William Benzon. Toward a Computational Historicism: From Literary Networks to the Autonomous Aesthetic. Working Paper, May 2014. Download URL: https://www.academia.edu/7776103/Toward_a_Computational_Historicism_From_Literary_Networks_to_the_Autonomous_Aesthetic

[4] Mark Algee-Hewitt, Ryan Heuser, Franco Moretti. On Paragraphs. Scale, Themes, and Narrative Form. Stanford Literary Lab, Pamphlet 10, October 2015. 22 pp. URL: http://litlab.stanford.edu/LiteraryLabPamphlet10.pdf
