In the 6th pamphlet from Stanford’s Literary Lab, “Operationalizing”: or, the Function of Measurement in Modern Literary Theory, Franco Moretti ended with a call to explicate the theoretical consequences of computing for literary study. That’s what I’ve been doing. It is now time to wrap up the exposition.
Let us begin with a passage from one of the last essays published by Edward Said, Globalizing Literary Study (PMLA, Vol. 116, No. 1, 2001, pp. 64-68). In his second paragraph Said notes: “An increasing number of us, I think, feel that there is something basically unworkable or at least drastically changed about the traditional frameworks in which we study literature” (p. 64). Agreed. He goes on (pp. 64-65):
I myself have no doubt, for instance, that an autonomous aesthetic realm exists, yet how it exists in relation to history, politics, social structures, and the like, is really difficult to specify. Questions and doubts about all these other relations have eroded the formerly perdurable national and aesthetic frameworks, limits, and boundaries almost completely. The notion neither of author, nor of work, nor of nation is as dependable as it once was, and for that matter the role of imagination, which used to be a central one, along with that of identity has undergone a Copernican transformation in the common understanding of it.
What has happened to all those things, as Alan Liu has noted in “The Meaning of the Digital Humanities” (PMLA 128, 2013, 409-423) is that they have dissolved into vast networks of objects and processes interacting across many different spatial and temporal scales, from the syllables of a haiku dropping into a neural net through the process of rendering ancient texts into movies made in Hollywood, Bollywood, or “Chinawood” (that is, Hengdian, in Zhejiang Province) and shown around the world.
If it is difficult to gain conceptual purchase on the autonomous aesthetic realm, then perhaps we need new conceptual tools. Computation provides tools that allow us to examine large bodies of texts in new ways, ways we are only beginning to utilize. But computation also gives us new ways of thinking about the mind, and that is particularly important, and problematic, in this context.
The conceptual relations between minds and computing have been studied in great detail over the past half century. That work is exciting, speculative, and deeply problematic. I have no intention of rehearsing those controversies here. Let us set them aside for the moment. One doesn’t have to adopt the view, mistaken in my opinion, that brains are computers in order to use the computer as a model for thought.
For what makes the autonomous aesthetic so elusive is that it is of the mind, the psyche, the soul if you will. Whatever its limitations, the computer, as idea, as abstract model, is the most explicit model we have for the mind. Or, to be more precise, it is the most explicit model we have for how a mind might be embodied in matter, such as a brain.
All real computation is embodied computation, whether it is in the brain of a school child doing sums or in an energy-hogging data cruncher winning at chess or a prime-time television game show. Some kinds of mechanisms are at work in minds, and the cognitive sciences have accumulated several decades’ worth of serious investigation into those mechanisms. If the Foucaultian épistème, for example, is more than a phantasm in the minds of post-structuralist thinkers, then it must have some kinds of operative mechanisms. Computing, in one form or another, provides our best tools for extending the limits of our knowledge of those processes and thereby, in the manner of Willard McCarty’s beloved via negativa, bringing us to the edge of new oceans of ignorance.
So, just what are the prospects of this computational historicism?
Early in my career I collaborated with David Hays, then my teacher, on a review of the literature in computational linguistics: Computational Linguistics and the Humanist (Computers and the Humanities, Vol. 10, 1976, pp. 265-274). That essay began with parsers and the like and then moved to knowledge representation and discourse, such as the cognitive networks we looked at earlier in this series. Those models and techniques are all at what I have called the micro-scale. Our review culminated in a bit of fantasy, a computer simulation adequate to the task of “reading” Shakespeare in a way that usefully approximates what humans do. At that time, four decades ago, I expected that later in my career I would be working on such a simulation.
That did not happen. The technology simply didn’t develop in that way. The problems proved too difficult. Nor do I see any prospect for such technology developing in the foreseeable future.
One doesn’t need such technology in order to think about the difference between Amleth and Hamlet, between Shakespeare’s Lear and the struggles of Reverend Francis Wayland that Stephen Greenblatt has pondered. I believe we do know enough about language, computing, and thought to submit that difference to computational analysis. I believe that can be done with meso-scale models, of which Moretti’s networks are one example. That much alone makes a new computational historicism thinkable.
But we have more than that now at our disposal. Within two decades after Hays and I had published our review, techniques emerged for the statistical analysis of large bodies of text, techniques now providing much of the excitement in the digital humanities. That work has only begun.
The deepest results, of course, will come from combining all the techniques we can muster. We need computational techniques and models on macro, meso, and micro scales; some we have in hand, others will be developed. But we also need the patient analysis and description of texts that have been at the heart of our enterprise for the last century.
I hesitate to predict where that will lead. We stand at the edge of the truly unknowable. While it has no name, we might do well to think of it as that autonomous aesthetic realm Said affirmed.