Saturday, March 19, 2016

Bayesian reasoning and fictional characters

Last year Stanford's Arcade published Probability and Literary Being, an exchange between Hannah Walser and J.D. Porter. It starts from a remark Toni Morrison has made about how characters in her novels seem to take on a life of their own:
Hannah Walser: Cognitive literary studies has an explanation for this phenomenon ready to hand. Blakey Vermeule, for instance, claims that “the illusion of [characters’] independent agency” experienced by novelists has been speculatively linked both to “imaginary play in childhood … and to mind-reading capacities in general” (46-7). The idea is that our brains are evolutionarily primed to attribute mental states and intentions not only to humans themselves, but to humanlike entities—anthropomorphic cartoons, expressive robots, and yes, literary characters. When authors feel their characters directing them or resisting them or talking back, they’re experiencing a side effect of this overattribution of minds, just as readers are when they feel as though characters are real humans with lives outside the boundaries of the story.

JDP: The cognitive piece is crucial here, but I think this actually extends to an ontological question as well. We think of characters and fictional worlds as wholly manipulable just because they were invented; they are mere arrangements of technical features that the author could adjust in any way at all. But the suggestion that Morrison and others have made is that, once some of those technical features are in place, the author loses a little control. The characters attain some sort of actual ontological status related to the epistemological issues you raise. I think the explanation here has to do with probability, specifically Bayesian reasoning and its deployment of priors.

HW: Agreed—our judgment of others’ agency is as much a matter of probabilistic reasoning as it is of imaginative projection. When it comes to contemporary research on how we learn to understand other minds, Hume, not Kant, is the patron saint—sometimes even explicitly credited as such, by the developmental psychologist Alison Gopnik for instance (Gopnik 76). “Causal learning,” Gopnik notes, “is a notorious example of the gap between experience and truth” (75), most of all in the case of mental causes—desires, intentions, beliefs, and so on—which are by definition invisible and perhaps little more than hypothetical. But Gopnik’s idea, borne out by research into the learning processes of infants and small children, is that human reasoning about causality can be modeled according to Bayes’s concept of probability, which takes into account both the conditional probability of an event—for instance, the probability that it’s raining outside, given that I’m taking my umbrella to work—and the prior probability of the two events: that it’s raining, and that I would take my umbrella to work on any given day regardless of weather. (Wikipedia has a good breakdown of Bayesian basics.)

I’m going to explain this in detail, because Gopnik’s adaptation of Bayesian probability to social cognition strikes me as a persuasive theory of how we come to see other humans as intending agents with personalities: we notice that variations in behavior are often predictable on the basis of hypothesized individual preferences and beliefs rather than inherent features of the world.
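To make the umbrella example from the passage above concrete, here is a minimal sketch of Bayes' theorem in Python. The specific numbers (how often it rains, how often I carry the umbrella regardless of weather) are assumptions chosen purely for illustration; the point is just how the prior and the conditional probabilities combine into a posterior.

```python
# Toy illustration of the rain-and-umbrella example.
# All probabilities below are assumed numbers, used only for illustration.

def bayes_posterior(prior_rain, p_umbrella_given_rain, p_umbrella_given_dry):
    """Return P(rain | umbrella) via Bayes' theorem."""
    prior_dry = 1.0 - prior_rain
    # Total probability that I take my umbrella on any given day.
    p_umbrella = (p_umbrella_given_rain * prior_rain
                  + p_umbrella_given_dry * prior_dry)
    # Posterior: how likely rain is, given that the umbrella was taken.
    return p_umbrella_given_rain * prior_rain / p_umbrella

# Hypothetical values: it rains on 20% of days; I take the umbrella on
# 90% of rainy days and on 15% of dry days regardless of weather.
print(bayes_posterior(prior_rain=0.2,
                      p_umbrella_given_rain=0.9,
                      p_umbrella_given_dry=0.15))  # ~0.6
```

A higher prior for rain, or a lower "umbrella regardless" rate, pushes the posterior up. Swapping rain for a hypothesized preference or belief gives the flavor of Gopnik's move as Walser describes it: the invisible mental cause is the hypothesis whose probability gets updated as behavior is observed.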
