In The End of Science, John Horgan identified something called the ‘neural code’ as the Holy Grail of neuroscience, indeed perhaps of all of science (from the preface to the 2015 edition): “The neural code is arguably the most important problem in science—and the hardest.” But the code has so far proved elusive. Here are some passages from an article Horgan posted in 2016:
Koch doubts, however, that the neural code “will be anything as simple and as universal as the genetic code.” Neural codes seem to vary in different species, he notes, and even in different sensory modes within the same species. “The code for hearing is not the same as that for smelling,” he explains, “in part because the phonemes that make up words change within a tiny fraction of a second, while smells wax and wane much more slowly.”
“There may be no universal principle” governing neural-information processing, Koch says, “above and beyond the insight that brains are amazingly adaptive and can extract every bit of information possible, inventing new codes as necessary.” So little is known about how the brain processes information that “it’s difficult to rule out any coding scheme at this time.”
A bit later:
British neurobiologist Steven Rose suspects that the brain processes information at scales both above and below the level of individual neurons and synapses, via genetic, hormonal, and other processes. He therefore challenges a key assumption of Singularitarians, that spikes represent the sum total of the brain’s computational output. The brain’s information-processing power may be many orders of magnitude greater than action potentials alone suggest.
Moreover, decoding neural signals from individual brains will always be extraordinarily difficult, Rose argues, because each individual’s brain is unique and ever-changing. To dramatize this point, Rose poses a thought experiment involving a “cerebroscope,” which can record everything that happens in a brain, at micro and macro levels, in real time.
Let’s say the cerebroscope records all of Rose’s neural activity as he watches a red bus coming down a street. Could the cerebroscope reconstruct what Rose is feeling? No, because his neural response to even that simple stimulus grows out of his brain’s entire previous history, including a childhood incident when a bus almost ran him over.
To interpret the neural activity corresponding to any moment, Rose elaborates, scientists would need “access to my entire neural and hormonal life history” as well as to all his corresponding experiences.
That resonates with remarks by the late Walter Freeman that Horgan mentioned in an earlier piece on the neural code:
Then there is the chaotic code championed by Walter J. Freeman of the University of California at Berkeley. For decades, he has contended that far too much emphasis has been placed on individual neurons and action potentials, for reasons that are less empirical than pedagogical. The action potential “organizes data, it is easy to teach, and the data are so compelling in terms of the immediacy of spikes on a screen.” But spikes are ultimately just “errand boys,” Freeman says; they serve to convey raw sensory information into the brain, but then much more subtle, larger-scale processes immediately take over.
The most vital components of cognition, Freeman believes, are the electrical and magnetic fields, generated by synaptic currents, that constantly ripple through the brain. [...]
The uniqueness of each individual represents a fundamental barrier to science’s attempts to understand and control the mind. Although all humans share a “universal mode of operation,” says Freeman, even identical twins have divergent life histories and hence unique memories, perceptions, predilections. The patterns of neural activity underpinning our selves keep changing throughout our lives as we learn to play checkers, read Thus Spoke Zarathustra, fall in love, lose a job, win the lottery, get divorced, take Prozac.
AI researcher Yann LeCun has made some remarks that seem relevant to me. This is from a podcast quoted by Kenneth Church and Mark Liberman in a recent article, “The Future of Computational Linguistics: On Beyond Alchemy”:
All of AI relies on representations. The question is where do those representations come from? So, uh, the classical way to build a pattern recognition system was . . . to build what’s called a feature extractor . . . a whole lot of papers on what features you should extract if you want to recognize, uh, written digits and other features you should extract if you want to recognize like a chair from the table or something or detect...
If you can train the entire thing end to end—that means the system learns its own features. You don’t have to engineer the features anymore, you know, they just emerge from the learning process. So that, that, that’s what was really appealing to me.
That second paragraph is the important one. It means, to use a phrase I’ve come to favor, that the mind is built from the inside.
And that, it seems to me, is what makes artificial neural nets (ANNs) so interesting and powerful. Individual ‘neurons’ in these nets resemble real neurons about as much as the smiley-face emoticon resembles the Mona Lisa. ANNs depend on backpropagation, which doesn’t seem to exist in real brains (see, for example, Grace Lindsay, Models of the Mind, p. 82). But ANNs learn ‘from the inside’ because, like real nervous systems, they are structured end-to-end: they take in external inputs and generate external outputs, and everything in between is learned rather than specified.
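To make LeCun’s contrast concrete, here is a minimal sketch, in plain NumPy, of a tiny network trained end-to-end with backpropagation. Everything in it is my own illustration, not anything from the Church and Liberman article: the toy task (labeling points by whether they fall inside the unit circle), the layer sizes, and the learning rate are arbitrary choices. A classical pattern-recognition system might be handed the radius x² + y² as an engineered feature; here nobody specifies any feature at all, and the hidden layer has to discover something equivalent on its own, ‘from the inside’.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated toy task: label 2-D points by whether they lie inside the
# unit circle. The boundary is nonlinear, so useful features must be
# learned; no feature extractor is engineered by hand.
X = rng.uniform(-2.0, 2.0, size=(512, 2))
y = (np.sum(X**2, axis=1) < 1.0).astype(float).reshape(-1, 1)

# One hidden layer. Its weights start random, i.e. featureless.
W1 = rng.normal(0.0, 0.5, size=(2, 16))
b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, size=(16, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    # Forward pass: input -> learned hidden 'features' -> prediction.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: backpropagation of the cross-entropy loss.
    dz2 = (p - y) / len(X)             # gradient at the output pre-activation
    dW2 = h.T @ dz2
    db2 = dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * (1.0 - h**2)  # chain rule through tanh
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)

    # Gradient step: the hidden 'features' reshape themselves end to end.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

accuracy = ((p > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.3f}")
```

After a couple of thousand gradient steps the hidden units typically come to encode something radius-like, and accuracy climbs well above the roughly 80% you would get by always guessing ‘outside’. Nothing in the code names that feature; it just emerges from the learning process, which is exactly the point LeCun was making.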