Wednesday, February 16, 2022

Some problems with Angus Fletcher’s account of neurons, logic, and narrative

A week or so ago a conversation on Twitter led me to this article:

Angus Fletcher, Why Computers Will Never Read (or Write) Literature: A Logical Proof and a Narrative, Narrative, Volume 29, Number 1, January 2021, pp. 1-28. DOI: https://doi.org/10.1353/nar.2021.0000

I read it with some care, highlighting, underlining, and even commenting in the margins, as one does.

I don’t know what expectations the title conjures up for you, but if you expect a lot of prose about computers, you’ll be disappointed. There is some, running in tandem with prose about nervous systems, but that’s only the last third of the article. The article’s initial sections sketch out a historical account of how academic literary studies reached their current state. It ranges over I. A. Richards, Aristotle, Francis Bacon, C. S. Peirce, and various others and concludes that interpretive criticism is allied with semiotics, if not an actual species of it, and joins logic in the world of computers. I suspect that most literary critics will find this account rather inventive.

Fletcher hitches his wagon to Bacon and Eric Kandel (a Nobelist in neuroscience) to proclaim a science of literature. That’s where computers come in. Computers, being logical engines, will never be adequate to narrative. Narrative is about causality, which comes naturally to neurons. This is the part of Fletcher’s argument that interests me.

Nervous systems

Let’s jump right in (p. 16):

But there’s one feature of human learning that computers are incapable of copying: the power of our synapses to control the direction of our ideas. That control is made possible by the fact that our neurons fire in only one direction, from dendrite to synapse. So when our synapse creates a connection between neuron A and neuron C, the connection is a one-way A → C route that establishes neuron A as a (past) cause and neuron C as a (future) effect. It’s our brain thinking: “A causes C.”

First, there’s the question of direction. Which direction is it? Earlier he had said (p. 15):

And when the synapse is triggered (typically by an electrical signal from the neuron’s axon), it carries a signal across the juncture, becoming the “middle” that connects the two neurons together.

Which direction is it, from axon through synapse to dendrite, or from dendrite to synapse? (For what it’s worth, Wikipedia’s entry about synapses says it’s the first.)

However, while Fletcher’s confusion on this matter is unfortunate, it is relatively minor. What’s important is the fact of direction. Whether the direction runs from axon through synapse to dendrite or the opposite doesn’t matter for his argument.

But just what does that get us? Night, for example, infallibly follows day, but is that a causal relationship? Does night cause day, or, for that matter, day cause night? Is temporal order sufficient for causality? David Hume didn’t think so. All we actually have is correlation, reliable correlation to be sure, but still only correlation. It is all well and good to assert (p. 16), “Cause-and-effect encodes the why of causation, while if-then encodes the that-without-why of correlation.” But Fletcher hasn’t told us how that is done. He just asserts, without further argument, that temporal order is sufficient to encode the why.
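To see how little temporal order by itself buys you, here is a toy sketch in Python (my own illustration, nothing from Fletcher’s text). Day and night alternate perfectly, so the “if-then” regularity is flawless in both directions, and nothing in the sequence itself encodes a why:

```python
# A toy alternating sequence: day, night, day, night, ...
sequence = ["day", "night"] * 10

def follows(seq, a, b):
    """Fraction of occurrences of `a` immediately followed by `b`."""
    positions = [i for i in range(len(seq) - 1) if seq[i] == a]
    if not positions:
        return 0.0
    return sum(seq[i + 1] == b for i in positions) / len(positions)

print(follows(sequence, "day", "night"))  # 1.0 -- night always follows day
print(follows(sequence, "night", "day"))  # 1.0 -- and day always follows night

# Both "if day then night" and "if night then day" hold perfectly.
# Temporal order gives reliable correlation in both directions;
# no causal "why" can be read off the sequence itself.
```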

There is, however, a more fundamental problem with Fletcher’s argument. He seems to be assuming that the only way a brain (or a computer) can deal with causal relations in the world is if the causal process in the brain (or computer) is like the process in the world. Thus, since causes are prior to effects, the neural representation of the cause must be prior to the neural representation of the effect: “neuron A as a (past) cause and neuron C as a (future) effect.” A bit later Fletcher will assert (p. 17): “And since the human neocortex contains over 20 billion neurons (far more than any other species), our brains each possess tens of trillions of neocortical connections that can be stretched into long and branching chains of cause-and-effect.”

Is that so? Consider this well-known causal chain:

This is the farmer sowing his corn,
That kept the cock that crow’d in the morn,
That waked the priest all shaven and shorn,
That married the man all tatter’d and torn,
That kissed the maiden all forlorn,
That milk’d the cow with the crumpled horn,
That tossed the dog,
That worried the cat,
That killed the rat,
That ate the malt,
That lay in the house that Jack built.

Is Fletcher telling us that this sequence (with its mixture of causal and quasi-causal links between episodes) is carried in the brain by eleven neurons linked in order? What evidence does he offer that narratives are encoded in such a simple and direct fashion? Or is he merely suggesting that things might be so? Well, then, they might not. For what it’s worth, my own thinking has been strongly influenced by thinkers who see even relatively simple sensations, objects, and events as being encoded in populations of neurons (e.g. Walter Freeman). I do not believe that the nervous system encodes any narrative as a chain of neurons that mirrors the cause-and-effect chain in the narrative.
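Just to make the assumption vivid, here is a deliberately naive Python sketch (my own toy model for the sake of argument, not anything Fletcher actually proposes in code) of what “one neuron per episode, linked in order” would amount to for this rhyme:

```python
# A hypothetical, deliberately naive rendering of Fletcher's picture:
# one "neuron" per episode, linked in a one-way cause-and-effect chain.

episodes = [
    "farmer sows corn", "cock crows in the morn", "priest wakes",
    "man is married", "maiden is kissed", "cow is milked",
    "dog is tossed", "cat is worried", "rat is killed",
    "malt is eaten", "malt lies in the house that Jack built",
]

# Each "neuron" simply activates its successor: A -> C, C -> E, ...
chain = {a: b for a, b in zip(episodes, episodes[1:])}

event = episodes[0]
while event in chain:
    print(f"{event}  -->  {chain[event]}")
    event = chain[event]

# Eleven nodes, ten one-way links. If narratives were stored this way,
# recall would mean walking a fixed chain from a fixed start -- nothing
# like the population coding that (e.g.) Walter Freeman describes.
```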

Computers

What does he say about computers? Here he is talking about the Arithmetic Logic Unit of a digital computer’s central processor (p. 16):

That unit (as we saw above) is composed of syllogistic logic gates that run mathematical equations of the form of “A = C.” And unlike the A → C connections of our synapses, the A = C connections of the Arithmetic Logic Unit are not one-way routes. They can be reversed without altering their meaning: “A = C” means exactly the same as “C = A,” just as “2 + 2 = 4” means exactly the same as “4 = 2 + 2,” or “Bob is that man over there” means exactly the same as “That man over there is Bob.”

Well, yes, it is true that “A = C” and “C = A” are equivalent. But Fletcher is talking about physical signals traveling in physical circuits. Here’s a stylized conceptual diagram of the circuitry of an Arithmetic Logic Unit that I got from Wikipedia. 

The OpCode indicates the logical operation to be performed on the operands coming in at the top to yield the result at the bottom. That’s a one-way circuit: A, B to Y. If direction of signal flow is all you need for causal modeling, then it would seem that a computer’s ALU satisfies that condition.
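For concreteness, here is a toy Python model of a one-bit ALU slice (my illustration; real ALUs are multi-bit circuits built from gates, not Python functions). Operands and an opcode flow in, a result flows out, and nothing propagates backwards:

```python
# A toy one-bit ALU slice: operands A and B flow in, result Y flows out.

def alu(a: int, b: int, opcode: str) -> int:
    """One-way signal flow: (a, b, opcode) in, y out."""
    if opcode == "AND":
        return a & b
    if opcode == "OR":
        return a | b
    if opcode == "XOR":
        return a ^ b
    if opcode == "ADD":          # sum bit only; carry omitted for brevity
        return (a + b) & 1
    raise ValueError(f"unknown opcode: {opcode}")

y = alu(1, 0, "XOR")   # A, B -> Y; nothing flows from Y back to A or B
print(y)               # 1
```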

Alas, now things get complicated. The hardware, the ALU, switches simple electrical signals. What those signals represent, however, is a function of the software the computer is running at the time. This distinction between hardware and software doesn’t exist for nervous systems. Nervous systems consist of living cells, more or less fixed in place, which, as Fletcher noted, can modify the ways they interact with one another.

Digital computers are very versatile and have been programmed to function in many domains. The domain is encoded in the software and in the data drawn upon and generated through the software. Whether a computer is simulating the causal interactions of light photons in a CGI rendering package or of atoms in protein folding, or merely performing the more modest ordering of alphanumeric characters in a word processor, the logic gates of the ALU switch the signals on which the software depends.
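A quick hypothetical illustration of that point (mine, in Python): the very same bit pattern means different things depending on the software reading it.

```python
# The same 8-bit pattern, read under three different "software" conventions.
# The hardware just switches bits; what they represent is up to the program.
bits = 0b01000001  # decimal 65

print(bits)       # 65     -- as an unsigned integer
print(chr(bits))  # 'A'    -- as an ASCII character code
print(bits / 255) # ~0.255 -- as, say, a normalized light intensity
```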

Yes, the ability of computers to generate coherent narrative is limited, though researchers have been working on it for half a century. They’ve been working on reading narrative – for various senses of “read” – for just as long, with limited results. Will computers ever be able to deal with narrative as fluently as humans do? I doubt it, but I’m not inclined to blame the limitations on the operations of ALUs.

What’s a physical device? [Organized from the inside]

Moving on (p. 20):

But there’s a key difference between the synapse and the transistor. The synapse is a physical device, constructed from cellular proteins, that opens and shuts by adjusting its shape. The transistor, meanwhile, is an electronic device: the channel it gates is an electrical wire, and it itself is regulated by voltage. This means that the transistor can only function within a system that obeys the laws of electronics: the system’s current must remain within particular parameters, its overall circuit must be closed, its electrons must flow in precise patterns, etc.

Does Fletcher mean for us to believe that transistors are not physical, that electronic devices are...what? Immaterial spirit? Surely not. I would think he is making a contrast between two kinds of physical devices, call them natural and artificial, or perhaps organic and inanimate. And, just as transistors must obey the laws of electronics, so synapses must obey the laws of organic electro-chemistry. Why not say that? Why the misleading contrast between physical and electronic?

Fletcher concludes that paragraph with two short sentences: “So, the system cannot be improvised willy-nilly from within. It requires an overall, unified design.” First, it is by no means obvious to me that nervous systems lack an “overall, unified design.” I’d like to see that spelled out more carefully.

But it is the phrase “improvised willy-nilly from within” that caught my attention. For it speaks to one of my hobby-horses, the idea that minds are organized from the “inside” (in particular, see this post, Minds are built from the inside [evolution, development]). In contrast, computer programs are constructed from the “outside,” by programmers who, at least in principle, have access to the whole system. Computer programs are constructed by agents that are external to them and which can examine them in whole and part by part. No such agents exist for minds. Neurons, tightly coupled with one another, are the only agents involved (for a useful analogy, see my post, Busy Bee Brain).

I note, however, that recent machine learning programs, artificial neural networks in particular, are somewhat different. The programmers construct a learning architecture from the “outside,” like any computer program. That program then operates on (learns) a collection of data – often enough, a very large collection – and constructs a model of it. It is that model that runs during the system’s inference phase. That model is not constructed by the programmer; it is constructed by the system itself. From the “inside”? Perhaps so. I note that, just as the operations of real nervous systems are largely opaque to us, so are the operations of these learned models.
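Here is a minimal sketch of that division of labor (my own toy example in Python, assuming squared-error gradient descent on a one-parameter model): the learning loop is written from the “outside”; the value the weight ends up with is constructed by the system itself from the data.

```python
# The architecture and learning rule are written from the "outside".
# The model (here, a single weight w) is constructed from the data.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

w = 0.0                      # the "inside": a parameter the system sets itself
for _ in range(200):         # the "outside": a loop the programmer designed
    for x, y in data:
        prediction = w * x
        error = prediction - y
        w -= 0.01 * error * x  # gradient step for squared error

print(w)  # close to 2.0 -- a value no programmer wrote into the code
```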

Where are we? [suggested reading]

As I have indicated above, I’m skeptical that computers will ever handle narrative as fluently as humans do. But I don’t find Fletcher’s argument at all convincing. It claims too much for neurons and muddles the operations of digital computers. As for the “logical proof” he mentions in his title, the best that can be said for it is that it is dubious and pretentious.

* * * * *

I will note as well that I have found the idea of computing quite useful in thinking about literature. I’ve explained this at some length in this article:

Literary Morphology: Nine Propositions in a Naturalist Theory of Form, PsyArt: An Online Journal for the Psychological Study of the Arts, August 2006, Article 060608, https://www.academia.edu/235110/Literary_Morphology_Nine_Propositions_in_a_Naturalist_Theory_of_Form

A somewhat more recent essay (with two examples, Shakespeare’s Sonnet 129 and Obama’s Eulogy for Clementa Pinckney):

Sharing Experience: Computation, Form, and Meaning in the Work of Literature, September 28, 2016, 221 pp. https://www.academia.edu/28764246/Sharing_Experience_Computation_Form_and_Meaning_in_the_Work_of_Literature

If you are interested in exploring the similarities and differences between brains and computers I suggest Grace Lindsay, Models of the Mind: How Physics, Engineering and Mathematics Have Shaped Our Understanding of the Brain, Bloomsbury, 2021. Despite the formidable title, the book is very readable and requires no particular technical expertise. I have reviewed it in 3 Quarks Daily.

The article linked in the following tweet by Lindsay is technical. I’m parking it here for reference purposes.

Note that comparing the functioning of artificial neural nets and real nervous systems is now an active area of investigation.
