Thursday, September 10, 2020

The neural code meets Sydney Lamb’s daughter

Back in 2004 science journalist John Horgan did a survey of thinking about the neural code:
Decode the Human Brain? Discover Magazine, October 29, 2004, https://www.discovermagazine.com/mind/the-myth-of-mind-control.
He updated that in 2016:
John Horgan, The Singularity and the Neural Code, Scientific American, Cross-Check blog, March 22, 2016, https://blogs.scientificamerican.com/cross-check/the-singularity-and-the-neural-code/.
His conclusion (2016):
Neuroscientists still have no idea what the neural code is. That is not to say they don’t have any candidates. Far from it. Like voters in a U.S. presidential primary, researchers have a surfeit of candidates, each seriously flawed.
But what, you ask, is this neural code? Here is what Horgan said in 2004, where he was referring to what some investigators thought we could achieve through brain-machine interfaces:
To achieve truly precise mind reading and control, neuroscientists must master the syntax or set of rules that transform electrochemical pulses coursing through the brain into perceptions, memories, emotions, and decisions. Deciphering this so-called neural code – think of it as the brain's software – is the ultimate goal of many scientists tinkering with brain-machine interfaces.

The neural code is often likened to the machine code that underpins the operating system of a digital computer. Like transistors, neurons serve as switches, or logic gates, absorbing and emitting electrochemical pulses, called action potentials, which resemble the basic units of information in digital computers.
Therein lies the problem, the comparison of brain and digital computer. Everyone knows that the brain and the computer are quite different in many respects, but the metaphor persists because the computer, unlike the brain, is something we understand, and yet, like the brain, it is complex and deals with information, whatever that is.

I am reminded of a story Sydney Lamb tells on the first page of his book, Pathways of the Brain [1]:
Some years ago I asked one of my daughters, as she sat at the piano, “When you hit that piano key with your finger, how does your mind tell your finger what to do?” She thought for a moment, her face brightening with the intellectual challenge, and said, “Well, my brain writes a little note and sends it down my arm to my hand, then my hand reads the note and knows what to do.” Not too bad for a five-year-old.
What if Lamb had followed up with various questions?
How does the brain write the note? Does it have an arm, hand, and fingers? Does it use a pen or a pencil? What kind of paper? Where does it get the paper? How many eyes does your hand have?
My guess is that she would quickly have become frustrated, confused, and indignant, remarking “That’s silly,” or something to that effect. She may not have been able to articulate what was wrong, that her dad was taking her metaphor far more literally than she intended, but she knew he was doing something wrong and that she did not believe any of those nonsensical things.

Having told that story, Lamb goes on to suggest that an awful lot of professional thinking about the brain takes place in such terms (p. 2):
This mode of theorizing is seen in ... statements about such things as lexical semantic retrieval, and in descriptions of mental processes like that of naming what is in a picture, to the effect that the visual information is transmitted from the visual area to a language area where it gets transformed into a phonological representation so that a spoken description of the picture may be produced.... It is the theory of the five-year-old expressed in only slightly more sophisticated terms. This mode of talking about operations in the brain is obscuring just those operations we are most intent on understanding, the fundamental processes of the mind.
I agree with Lamb on this. To the extent that the hunt for the neural code is guided, either consciously or unconsciously, by the computer metaphor, it will fail. Thus one of Horgan’s informants remarked (2004):
The brain is “so adaptive, so dynamic, changing so frequently from instant to instant,” says Miguel Nicolelis, a neural-prosthesis researcher at Duke University, that “it may not be proper to use the term ‘code’.”
I would go one step further and change “may not” to “is not”. There simply is no neural code, not in any reasonable sense of the term “code”.

Let’s take a look at three differences between digital computers and brains. First, a basic digital computer has one active unit, the so-called central processing unit (CPU), and many passive units, for storage or memory. On the other hand, every unit in the brain, that is, every neuron, is active, and may be conceived of as a memory unit, albeit in a technical sense that’s a bit different from the common sense of the term. This contrast is one John von Neumann made in his classic, The Computer and the Brain [2].

Second, each unit of information or content in a digital computer has an address associated with it. The address tells where that unit of information is currently located in the computer. During its lifetime a given unit of information is likely to have many addresses. Sometimes it will be located in long-term storage on a hard drive or perhaps flash memory; if it is involved in a current operation it may be located in working RAM, or even in a register in the CPU. Notice that this distinction, between address and unit of content, is related to the fact that digital computers have a single CPU and many memory units. Much of the processing in a digital computer is devoted to keeping track of where content units are in the system.
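To make the address/content distinction concrete, here is a minimal sketch in Python. The dictionaries standing in for storage devices are my own toy assumption, not anything in Horgan or Lamb; the point is only that one unit of content occupies different addresses over time.

```python
# Toy model: memory as a mapping from addresses to content.
# The same unit of content lives at different addresses at
# different times during its life in the machine.

disk = {0x1000: "hello"}       # long-term storage
ram = {}                       # working memory

# "Loading" the content copies it to a new address in RAM.
ram[0x20] = disk[0x1000]

# The content is unchanged; only its current address differs.
assert ram[0x20] == disk[0x1000] == "hello"

# Much of a real system's work is exactly this bookkeeping:
# a table recording where each unit of content currently sits.
location_of = {"hello": [0x1000, 0x20]}
```

Notice that the content itself never "knows" its address; the bookkeeping is entirely external to it, which is part of what makes the distinction possible.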

That distinction doesn’t make sense in the case of the brain, nor does the distinction between hardware and software on which it is based. Neurons have locations and at any given moment they are either sending a spike or not. But it would be a mistake to think of those spikes as analogous to the notes that Lamb’s daughter was imagining her brain sending to her hand, with a note’s address changing as it moved from brain to hand. Spikes are much simpler than that.

Third, the hardware/software distinction that is central to digital computing is meaningless in the case of brains. It is the hardware/software distinction that allows, but also forces, the digital computer to have a single active unit, the CPU, serving many passive memory units. Software orchestrates how information is passed between the memory units and the CPU. There is no need for a hardware/software distinction when the basic components of your device are active units that are also memory units.
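The contrast between the two architectures can be sketched in a few lines of Python. Both "machines" here are toy assumptions of mine: (a) a single active unit visiting many passive cells one at a time, versus (b) units that each hold their own state and all update at once.

```python
# (a) von Neumann style: one active unit (the "CPU") steps through
#     many passive memory cells, updating them one at a time.
memory = [1, 2, 3, 4]
for addr in range(len(memory)):      # the lone CPU visits each cell
    memory[addr] = memory[addr] * 2

# (b) brain style: every unit is active *and* is its own memory;
#     each unit updates itself from its neighbors, all in parallel.
class Unit:
    def __init__(self, state):
        self.state = state           # the unit stores its own state
    def step(self, neighbor_states):
        # toy update rule: add the total activity of the neighbors
        self.state = self.state + sum(neighbor_states)

units = [Unit(s) for s in (1, 2, 3, 4)]
snapshot = [u.state for u in units]  # so all updates are simultaneous
for i, u in enumerate(units):
    u.step([s for j, s in enumerate(snapshot) if j != i])
```

In (b) there is no separate program shuttling values to a central processor; the "program" just is the pattern of connections and the local update rule.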

So what, you might be asking, are the neurons doing if not passing “notes” around? That’s a good question. We don’t quite know. My point, and Lamb’s, is that, as long as we employ the computer metaphor of passing information from place to place, we’ll never know. Lamb spends the rest of his book articulating a different point of view, one about relational networks. And while he argues that point at the level of relations among words and concepts in a network rather than relations among neurons in a network, I believe it holds at the neural level as well.

There is a hint of Lamb’s conceptualization in Horgan’s 2004 article where he talks of synchrony within a group of neurons:
Some evidence suggests that synchrony helps us focus our attention. If you are at a noisy cocktail party and suddenly hear someone nearby talking about you, your ability to eavesdrop on that conversation and ignore all the others around you could result from the synchronous firing of cells. “Synchrony is an effective way to boost the power of a signal and the impact it has downstream on other neurons,” says Terry Sejnowski, a computational neurobiologist at the Salk Institute. He speculates that the abundant feedback loops linking neurons allow them to synchronize their firing before passing messages on for further processing.
That last bit, about passing messages and further processing, is an intrusion from the computer metaphor, so let us forget it and think about synchrony. The synchronous neurons may be fairly widely distributed; some will be closer to the periphery while others will be further ‘upstream’. The more peripheral neurons are likely to have relatively simple response sensitivities while the more central ones will have more complex sensitivities. Whatever it is that the ensemble is responding to, information about the phenomenon is distributed across the ensemble rather than being lodged in any one part of it.

If you want to anthropomorphize the process, think of each neuron as a scout on the lookout for some one thing or category of things. When it spots it, it sends out a signal, “got it!”, which goes to all those upstream neurons with which it has connections. If the neuron is at the periphery of the nervous system, then it’s looking for a got-it from a sensory cell or cells. If the neuron is upstream from the periphery, then it’s looking for some pattern of got-its from its downstream connections. When a group of neurons is synchronized, they’re all yelling got-it at the same time. Nothing is transferred from one place to another or transformed from one thing to another.
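The scout picture can be sketched in code. The names, features, and threshold below are mine, a hypothetical construction rather than anything Lamb proposes; the point is that each unit only signals "got it!" or stays silent, and an upstream unit watches for a pattern of got-its, with no note-like content passed around.

```python
# A minimal "got-it" sketch. Each unit emits a bare yes/no signal;
# nothing resembling a message or a note is transferred.

def scout(watch_for):
    """A peripheral scout: fires iff its target appears in the input."""
    return lambda inputs: watch_for in inputs

def upstream(downstream_scouts, need):
    """An upstream unit: fires iff at least `need` downstream got-its."""
    return lambda inputs: sum(s(inputs) for s in downstream_scouts) >= need

# Hypothetical peripheral scouts, each watching for one feature.
edge = scout("edge")
red = scout("red")
round_ = scout("round")

# An "apple detector" is nothing but a pattern over downstream got-its.
apple = upstream([edge, red, round_], need=3)

assert apple({"edge", "red", "round"}) is True   # all three yell got-it
assert apple({"edge", "red"}) is False           # pattern incomplete
```

Note that `apple` never receives any description of the input; it sees only which of its downstream scouts fired, which is the whole content of the "signal".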

Beyond that, alas, is more than I can manage in a blog post. But do take a look at Lamb’s book.

References

[1] Sydney Lamb, Pathways of the Brain: The neurocognitive basis of language, John Benjamins, 1999.

[2] John von Neumann, The Computer and the Brain, Yale University Press, 1958.
