I no longer remember just who started it off, if I ever knew, but the Twitter convo went on and on and branched here and there and covered a lot of territory. Somewhere in there one of Scott Weingart’s tweets led me back to a post by Ryan Cordell and I started thinking. Here’s what I thought. It’s about the difference between macro-scale (a corpus) and micro-scale (individual texts) analysis, how we conceptualize the relations between them, and context.
Tweet and Response
Here’s the tweet from Scott Weingart that got me thinking:
"we don’t…understand precisely how corpus-scale phenomena make their meaning, or how those meanings relate back to codex-scale artifacts."— Scott B. Weingart 🤹 (@scott_bot) July 27, 2017
Something about that quote, which is from Cordell’s post, struck me as odd. After more than a little thought scattered over a couple of days I’ve come to conclude that the oddness centers on the phrase “how corpus-scale phenomena make their meaning”. Phenomena MAKE their meaning? Well, yes, if THAT’s what’s going on then it IS a puzzle.
Long before I’d gotten to that, however, I told myself a simple story, starting with the assertion that I think about such phenomena as being evolutionary in kind. Evolutionary phenomena typically involve large populations of entities interacting with one another over some period of time. In this case we’ve got populations of 1) words, tokens of which are organized into 2) books (codices), populations of which circulate among 3) a population of readers. The fate of those books depends on the preferences of readers.
Macro and Micro Scale
With this in mind, let’s take a look at Cordell’s post. Here’s the paragraph that contains the phrase Weingart quoted:
Most incisively, Bode shows how much “distant reading” work reconstitutes the primary assumption of close reading: “the dematerialized and depopulated understanding of literature in Jockers’s work enacts the New Criticism’s neglect of context, in a manner rendered only more abstract by the large number of ‘texts’ under consideration.” The problem may be, in other words, not that computational analysis departs from analog methods, but that we interpret the results of corpus-level analysis too much like we interpret individual texts. To be provocative, I might rephrase to say that we don’t yet as a field understand precisely how corpus-scale phenomena make their meaning, or how those meanings relate back to codex-scale artifacts.
Let’s set Bode’s concerns about context aside for now.
The central issue is whether or not we should draw conclusions about corpus-level phenomena in the same way we draw conclusions about individual texts. Let’s take a specific example, Heuser and Le-Khac, A Quantitative Literary History of 2,958 Nineteenth-Century British Novels: The Semantic Cohort Method (2012). Much of the pamphlet is devoted to explaining how they were able to make a series of observations of their corpus indicating a shift from abstract to concrete vocabulary. That’s a descriptive statement. For the purposes of this post I’m going to treat that process as a black box and take the description at face value.
What interests me is how they get from that descriptive statement to a (possible) explanation for it. Roughly speaking, very roughly:
1) Over the course of a century England’s population shifts from rural to urban. This is a macro-scale phenomenon.

2) Following Raymond Williams, The Country and the City: at the micro-scale, patterns of social relations depend on local arrangements for living and working, with rural and urban areas having distinctly different patterns. In particular, people living in urban environments spend relatively more time interacting with strangers and casual acquaintances rather than intimates. People in rural environments spend more time with intimates.

3) These different patterns of social arrangements are expressed in different ways of characterizing people, their thoughts, feelings and motives, and their interactions in fiction. This is a micro-scale phenomenon, happening at the level of individual readers and authors. People living among intimates are relatively comfortable with abstractions whereas those living among strangers need concrete language.

4) The effect of these two micro-scale phenomena (2 and 3) in the context of the macro-scale population shift (1) is a macro-scale phenomenon visible at the level of the corpus: a shift from abstract to concrete vocabulary.
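The descriptive measurement in step 4 can be caricatured in a few lines of code. This is a toy sketch only: the abstract and concrete word lists and the three dated “texts” are invented for illustration, whereas Heuser and Le-Khac derive their semantic cohorts empirically from the corpus rather than from hand-picked lists.

```python
# Toy sketch: track the share of concrete (vs. abstract) vocabulary
# across a dated corpus. Word lists and texts are invented examples.

ABSTRACT = {"virtue", "passion", "sentiment", "duty", "honour"}
CONCRETE = {"street", "window", "hand", "door", "coat"}

corpus = [
    (1810, "her virtue and sentiment and duty and honour guided her passion"),
    (1850, "his duty and honour met her at the window in the street"),
    (1890, "a hand on the door a coat in the street a face at the window"),
]

def concrete_share(text):
    """Fraction of matched words drawn from the concrete list."""
    tokens = text.split()
    abstract = sum(t in ABSTRACT for t in tokens)
    concrete = sum(t in CONCRETE for t in tokens)
    matched = abstract + concrete
    return concrete / matched if matched else 0.0

for year, text in corpus:
    print(year, round(concrete_share(text), 2))
```

On this fabricated data the concrete share climbs from 0.0 to 1.0 across the century; the real pamphlet’s observation is a statistical trend of this general shape, established over thousands of novels.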
This argument thus moves between two scales of phenomenon, macro and micro, and two phenomenal ‘registers’, living arrangements and habits of mind.
That is indeed very different from how we interpret individual texts. There we have the text, on the one hand, and an interpretive scheme on the other, where both are constituted though ideas and affects. We match the text to the interpretive scheme and then employ the interpretive scheme to craft an interpretation.
Let’s consider a different example, Underwood and Sellers on literary standards, The Longue Durée of Literary Prestige (MLQ 2016). Here’s their abstract:
A history of literary prestige needs to study both works that achieved distinction and the mass of volumes from which they were distinguished. To understand how those patterns of preference changed across a century, we gathered two samples of English-language poetry from the period 1820–1919: one drawn from volumes reviewed in prominent periodicals and one selected at random from a large digital library (in which the majority of authors are relatively obscure). The stylistic differences associated with literary prominence turn out to be quite stable: a statistical model trained to distinguish reviewed from random volumes in any quarter of this century can make predictions almost as accurate about the rest of the period. The “poetic revolutions” described by many histories are not visible in this model; instead, there is a steady tendency for new volumes of poetry to change by slightly exaggerating certain features that defined prestige in the recent past.
That’s not what they were looking for, but it’s what they found. This century-long change is a macro-scale phenomenon. And their result is entirely descriptive. All they are telling us is that this is what happened with English-language poetry over the century 1820–1919. Underwood and Sellers offer the following provisional generalization (p. 336): “Diachronic change across a period seems to recapitulate the period’s synchronic axis of distinction.”
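Their train-on-one-period, predict-on-another design can be sketched in miniature. Everything below is synthetic and hypothetical: each “volume” gets two made-up style features, reviewed volumes are drawn from a slightly shifted distribution, a drift term nudges both classes over time, and a nearest-centroid rule stands in for the regularized logistic regression Underwood and Sellers actually use.

```python
# Toy sketch of training on one period and predicting another.
# All data is synthetic; this is not Underwood and Sellers's model.
import random

random.seed(0)

def make_volumes(n, reviewed, drift):
    """Synthetic 2-D feature vectors; `drift` nudges both classes over time."""
    base = 1.0 if reviewed else 0.0
    return [(base + drift + random.gauss(0, 0.3),
             base + drift + random.gauss(0, 0.3)) for _ in range(n)]

def centroid(vols):
    return tuple(sum(v[i] for v in vols) / len(vols) for i in range(2))

def classify(v, c_rev, c_rand):
    d = lambda c: sum((v[i] - c[i]) ** 2 for i in range(2))
    return d(c_rev) < d(c_rand)  # True = predicted "reviewed"

# "Train" on an earlier quarter-century, "test" on a later one.
train_rev, train_rand = make_volumes(50, True, 0.0), make_volumes(50, False, 0.0)
test_rev, test_rand = make_volumes(50, True, 0.2), make_volumes(50, False, 0.2)

c_rev, c_rand = centroid(train_rev), centroid(train_rand)
hits = sum(classify(v, c_rev, c_rand) for v in test_rev) + \
       sum(not classify(v, c_rev, c_rand) for v in test_rand)
print("out-of-period accuracy:", hits / 100)
```

Because the drift here moves both classes toward the “reviewed” region of feature space, the early-trained model also scores later volumes as slightly more likely to be reviewed, which mimics (in cartoon form) the upward drift of standards they observe.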
They don’t have an explanation for this phenomenon, but they do offer a bit of speculation (pp. 336-337):
It is actually odd that a model trained on 1845–69 sees works from the 1870s as more likely to be reviewed than the works it was trained on. We didn’t expect to see this, and we don’t want to claim that we understand why it happens. We might speculate, for instance, that standards tend to drift upward because critics and authors respond directly to pressure from reviewers or because they imitate, and slightly exaggerate, the standards already implicit in prominent examples. In that case, synchronic standards would produce diachronic change. But causality could also work the other way: a long-term pattern of diachronic change could itself create synchronic standards if readers in each decade formed their criteria of literary distinction partly by contrasting “the latest thing” to “the embarrassing past.” In fact, causal arrows could point in both of these directions.
These speculations about cause are at the temporal (more or less) micro-scale, whether 1) response of critics and authors to reviewers, 2) writers imitating current examples, or 3) readers reacting to the (immediate) past. Cumulatively such micro-scale actions result in a macro-scale stylistic shift.
What then of context, Bode’s concern? Just what does context mean when we’re dealing with a century-long run of texts from a single nation or language? Of course one century is different from another, as is one nation or language, so we’re not trading in the universal here. But still, context would seem to be a rather diffuse notion.
Let’s take a closer look.
For Heuser and Le-Khac a century-long population shift (people) provides the context for a century-long vocabulary shift (the population of words observed in the population of codices). These two phenomena are linked by the micro-scale phenomena of face-to-face social interaction, on the one hand, and the reading and writing of books, on the other, where these two hands interact to shape minds. The local social world provides the context in which texts find their meaning.
For Underwood and Sellers the context would seem to be a ‘moving window’ of local interactions among readers, reviewers, and texts which produces the large-scale phenomenon. That is to say, as soon as we move away from stating the phenomenon we observe in our large population of texts and words, we have to think about local interactions among texts, words, and people. And if this seems rather vague and abstract, that’s because, at this point, it is vague and abstract.
In time, and with care, reasoning, and good models, we can eliminate the vagueness. But the abstractness is not going to disappear. These are difficult problems.
And they are problems of a kind that literary criticism has not faced before. “Distant reading” is indeed different from “close reading”. We really cannot “interpret the results of corpus-level analysis” in the way that “we interpret individual texts.”
* * * * *
Frankly, what comes to mind is my friend Tim Perper’s characterization of biology as worlds within worlds within worlds. What we’ve got at the micro-scale are words, people, and codices dancing around one another in mutual context. And the dance percolates throughout these populations and over time until we have the macro-scale phenomena of Big History. So I’ll leave you with another Weingart tweet:
A huge issue; one complexity theorists are frankly ahead of us on. They've models of how systems effect individuals who can't see the whole.— Scott B. Weingart 🤹 (@scott_bot) July 27, 2017