I recently argued that Jakobson’s poetic function can be
regarded as a computational principle [1]. I want to elaborate on that a bit in
the context of the introduction to Representations’
recent special issue on description:
Sharon Marcus, Heather Love, and
Stephen Best, “Building a Better Description,” Representations 135 (Summer 2016): 1-21. DOI:
10.1525/rep.2016.135.1.1.
That article ends with six suggestions for building better
descriptions. The last three have to do with objectivity. It is the sixth and
last suggestion that interests me.
Rethink objectivity?
Here’s how that suggestion begins (p. 13):
6. Finally, we might rethink objectivity itself. One way to build a better description is to accept the basic critique of objectivity as impossible and undesirable. In response, we might practice forms of description that embrace subjectivity, uncertainty, incompleteness, and partiality. But why not also try out different ways of thinking about objectivity? Responsible scholarship is often understood as respecting the distinction between a phenomenon and the critical methods used to understand it; the task of the critic is to transform the phenomenon under consideration into a distinct category of analysis, and to make it an occasion for transformative thought. Mimetic description, by contrast, values fidelity to the object; in the case of descriptions that aim for accuracy, objectivity would not be about crushing the object, or putting it in perspective, or playing god, but about honoring what you describe.
OK, but that’s rather abstract. What might this mean,
concretely? They offer an example of bad description, though the fact that it is placed well
after a blank line on the page (p. 14 if you must know) suggests that it’s meant
to illustrate (some combination of) the six suggestions taken collectively
rather than only the last.
Here’s the example (taken from one of the articles in the
issue, pp. 14-15):
In her criticism of the objectivity imperative in audio description, Kleege explains that professional audio describers are instructed to avoid all personal interpretation and commentary. The premise is that if users are provided with an unbiased, unadorned description, they will be able to interpret and judge a film for themselves. Kleege writes, “In extreme instances, this imperative about absolute objectivity means that a character will be described as turning up the corners of her mouth rather than smiling.” For Kleege, reducing the familiar act of smiling to turning up the corners of one’s mouth is both absurd and condescending. The effort to produce an objective, literal account only leads to misunderstandings, awkwardness, and bathos. This zero-degree description is the paradoxical result of taking the critique of description, with its mistrust of interpretation and subjectivity, to one logical extreme. Tellingly, the professional audio describer’s “voice from nowhere” is not only weirdly particular; it also fails to be genuinely descriptive, since its “calm, controlled, but also cheerful” tone remains the same no matter what is being described.
OK, that makes sense. Let me suggest that smiles are
objectively real phenomena and so we don’t need to rethink objectivity in order
to accommodate this example.
To be sure, there are cases where one needs to know that
smiling means “turning up the corners of her mouth”. If one is investigating
the nature of smiles as communicative signals one might begin with that bit of description. But that
itself is not enough to differentiate between spontaneous smiles and deliberate
“fake” smiles – something that has been investigated, I’m sure. In such contexts smiles cannot be taken at
face value, as it were.
But that’s not the context we’re dealing with. We’re dealing
with someone watching a film and describing what the actors are doing. We’re
dealing with (images of) the natural context of smiling, human communication.
Those configurations of facial features and gestures exist for other people, and so it is appropriate, indeed necessary,
that we frame our descriptions in terms commensurate with the phenomena under
observation (“fidelity to the object”).
But what has that to do with describing literary texts?
Texts, after all, are quite different from human beings interacting with one
another.
Describing strings of characters
Texts are strings of characters – utterances may be
considered strings as well, but let’s bracket them out for ease of discussion.
Just as smiles are configurations of facial gestures directed at other people,
so texts are strings of characters directed at other people. By analogy it follows that, if we are to describe texts in
terms that are faithful to the object, we need to look for features ‘taken up’ by
readers.
That’s what linguists do in discussing syntax, for example.
Word order, punctuation, morphology, function words – these are taken as cues
to the underlying syntactic organization. It turns out, unfortunately, that
many different kinds of syntactic accounts are possible and linguists haven’t
figured out how to choose among them. If we knew how the mind worked, this
problem would disappear. We don’t, so the problem remains with us. In fact, one
reason we’re interested in syntax is that we conceive of it as a royal road (to
borrow a figure from Freud) to understanding the mind.
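To make the idea of surface cues concrete, here’s a toy sketch in Python. The FUNCTION_WORDS set is a tiny stand-in I’ve made up for the occasion – a real linguist’s inventory would be far larger – and the point is only that word order, punctuation, and function words are all right there in the string, available to any reader (or device) that takes them up:

    import re

    # A made-up, minimal stand-in for a real function-word inventory.
    FUNCTION_WORDS = {"the", "a", "an", "of", "to", "and", "that", "in", "her"}

    def surface_cues(text):
        # Pull out three kinds of surface cues to syntactic organization.
        tokens = re.findall(r"[a-z']+|[.,;:!?]", text.lower())
        return {
            "word_order": tokens,  # the sequence itself is a cue
            "punctuation": [t for t in tokens if not t[0].isalpha()],
            "function_words": [t for t in tokens if t in FUNCTION_WORDS],
        }

    print(surface_cues("The corners of her mouth turned up, and she smiled."))

None of this settles which syntactic account is correct, of course; it only shows how much of the evidence lies on the surface of the string.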
What’s important is the principle: the structure of
character strings is a function of the ‘device’ that reads and writes them.
Those strings exist for that device.
The case of computer programming languages is different in that we know how the device works, because we’ve designed and
built it. The physical device, the computer with its many chips, is one
thing. We work with the device through two, three, or more levels of
programming language. What ultimately ‘runs’ on the machine is strings of ones and
zeroes (represented as differences in voltage – details, details). But that’s not what programmers produce. They produce strings in some
programming language, which are then translated into some other language, which
may or may not consist of ones and zeroes. If it doesn’t, then that language is, in turn, translated
into yet another language, and so on, until we finally reach those ones and zeroes.
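If you want to see one of those translation steps with your own eyes, Python makes it easy: its standard dis module prints the bytecode the interpreter actually executes for a given function. A minimal sketch (greet is just a function I’ve made up for the demonstration):

    import dis

    def greet(name):
        # Ordinary source code, the kind of string a programmer produces.
        return "Hello, " + name

    # Show the bytecode this source was translated into; further levels
    # (machine code, voltages) lie below even these instructions.
    dis.dis(greet)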
So, we understand what’s going on because we’ve built
everything – shades of Vico’s verum factum. Except, alas, when it gets too big, we no longer understand what’s
going on [2].
In the past decade or so, however, AI workers have been
investigating so-called deep learning [3]. In those systems one programs the
computer to learn what’s going on in some set of example objects – they might
be images, or natural language strings, or games of Go, whatever. Once the
system has learned, it then does something ‘interesting’ in the object domain –
classifies objects in images, translates from one language to another, plays
Go, whatever. The people who program these learning systems know how they
work. But what those systems have learned, the internal ‘code’ they generate
that allows them to function in the object domain, that is proving opaque to
the investigators who’ve created the learning systems. It’s as though the
machines have developed ‘minds’ of their own, which is the goal.
[So, do I think they’re actual minds? No. But that’s another discussion.]
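For the flavor of it, here’s a minimal sketch of such a learner in Python with NumPy – a tiny two-layer network that learns XOR by gradient descent. Every line of the learning procedure is transparent; the learned weight matrices it prints at the end are not. They compute the right answers, but inspecting the numbers tells you next to nothing about how:

    import numpy as np

    rng = np.random.default_rng(0)

    # XOR: a tiny 'object domain' for the network to learn.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # We know exactly how the learner works: two weight matrices, biases,
    # a sigmoid squashing function, gradient descent on squared error.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
    W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(20000):
        h = sigmoid(X @ W1 + b1)              # hidden layer
        out = sigmoid(h @ W2 + b2)            # output layer
        d_out = (out - y) * out * (1 - out)   # backpropagate the error...
        d_h = (d_out @ W2.T) * h * (1 - h)    # ...through both layers
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

    print(np.round(out, 2))  # with luck, close to [[0], [1], [1], [0]]
    print(W1, W2)            # the learned numbers, which explain nothing

Scale that up by many orders of magnitude and you have the opacity problem the deep learning people face.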
But what’s this have to do with literary texts? Literary
texts are strings, just like individual sentences, though in most if not all
cases, texts are longer. To describe texts, then, we analyze and describe the features
relevant to the minds that produce and read them. That’s what we do in
analyzing verse. That’s what students of poetics set out to do in the previous
century, narratologists too.
Yada yada. I could go on and on. But I don’t need to, not
now, because I’ve been doing that for the past decade or two.
References
[1] “Jakobson’s poetic function as a computational principle, on the trail of the human mind,” New Savanna, accessed Sept. 28, 2017, http://new-savanna.blogspot.com/2017/09/jakobsons-poetic-function-as.html
[2] See, for example, my recent post, “The problem with software,” New Savanna, accessed Sept. 28, 2017, http://new-savanna.blogspot.com/2017/09/the-problem-with-software.html
[3] The web is jammed with material on deep learning. Here’s a site I’ve found useful: Deep Learning for Java, https://deeplearning4j.org/index.html
Addendum: This discussion continues in Objectivity and Intersubjective Agreement.