Saturday, August 22, 2020

On finding Donkey Kong transistors in a MOS 6502 microprocessor chip – Whoops! The methods of the neurosciences have problems, no?

This is from June 2019. I'm bumping it to the top of the queue because I'm thinking about these things.
A couple of days ago I posted a conversation with Rodney Brooks on the limitations of the computing metaphor as a vehicle for understanding the brain. Brooks mentioned an article, "Could a Neuroscientist Understand a Microprocessor?". The point of the article is that if you attempt to understand a microprocessor using the same methods neuroscientists use to understand the brain, you're going to come up with gibberish.
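
To make the paper's exercise concrete: one thing Jonas and Kording do is the chip-level analogue of a lesion study, knocking out individual transistors in a simulated 6502 and checking whether a game still boots. Here's a minimal sketch of that logic in Python. It's my own toy illustration, not their code; the "chip" below is a stand-in boolean circuit rather than a transistor-level simulation.

# Toy sketch of a lesion study in the spirit of Jonas & Kording (2017):
# knock out one component at a time and ask whether the behaviour survives.
# The real study lesions individual transistors in a full simulation of the
# MOS 6502 and checks whether games such as Donkey Kong still boot.

def run_system(lesioned, inputs):
    """Stand-in for the simulated chip: 'behaviour' is just a boolean
    function of three 'components', any of which can be lesioned."""
    a, b, c = (i not in lesioned for i in (0, 1, 2))
    return (inputs[0] and a and b) or (inputs[1] and c)

def lesion_study(num_components, inputs):
    """Disable one component at a time; record which ones look 'necessary'."""
    baseline = run_system(set(), inputs)
    necessary = []
    for comp in range(num_components):
        if run_system({comp}, inputs) != baseline:
            necessary.append(comp)
    return necessary

print(lesion_study(3, (True, False)))   # [0, 1] -- which components look "necessary"
print(lesion_study(3, (False, True)))   # [2]   -- depends entirely on the task

The trap the paper documents is right there in the output: which components look "necessary" depends on the behavior you happen to test, which is how you end up with "Donkey Kong transistors."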

I've located that article along with an informal account of the work in The Atlantic. I conclude with some observations of my own.

* * * * *

Jonas E, Kording KP (2017) Could a Neuroscientist Understand a Microprocessor? PLoS Comput Biol 13(1): e1005268. https://doi.org/10.1371/journal.pcbi.1005268
Abstract

There is a popular belief in neuroscience that we are primarily data limited, and that producing large, multimodal, and complex datasets will, with the help of advanced data analysis algorithms, lead to fundamental insights into the way the brain processes information. These datasets do not yet exist, and if they did we would have no way of evaluating whether or not the algorithmically-generated insights were sufficient or even correct. To address this, here we take a classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor. This suggests current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data. Additionally, we argue for scientists using complex non-linear dynamical systems with known ground truth, such as the microprocessor as a validation platform for time-series and structure discovery methods.

Author Summary

Neuroscience is held back by the fact that it is hard to evaluate if a conclusion is correct; the complexity of the systems under study and their experimental inaccessibility make the assessment of algorithmic and data analytic techniques challenging at best. We thus argue for testing approaches using known artifacts, where the correct interpretation is known. Here we present a microprocessor platform as one such test case. We find that many approaches in neuroscience, when used naïvely, fall short of producing a meaningful understanding.

* * * * *

Ed Yong, Can Neuroscience Understand Donkey Kong, Let Alone a Brain? The Atlantic, June 2, 2016. From the article:
The human brain contains 86 billion neurons, underlies all of humanity’s scientific and artistic endeavours, and has been repeatedly described as the most complex object in the known universe. By contrast, the MOS 6502 microchip contains 3510 transistors, runs Space Invaders, and wouldn’t even be the most complex object in my pocket. We know very little about how the brain works, but we understand the chip completely. [...]

Even though the duo knew everything about the chip—the state of each transistor and the voltage along every wire—their inferences were trivial at best and seriously misleading at worst. “Most of my friends assumed that we’d pull out some insights about how the processor works,” says Jonas. “But what we extracted was so incredibly superficial. We saw that the processor has a clock and it sometimes reads and writes to memory. Awesome, but in the real world, this would be a millions-of-dollars data set.”

Last week, the duo uploaded their paper, titled “Could a neuroscientist understand a microprocessor?” after a classic from 2002. It reads like both a playful thought experiment (albeit one backed up with data) and a serious shot across the bow. And although it has yet to undergo formal peer review, other neuroscientists have already called it a “landmark paper”, a “watershed moment”, and “the paper we all had in our minds but didn't dare to write”. “While their findings will not necessarily be surprising for a chip designer, they are humbling for a neuroscientist,” wrote Steve Fleming from University College London on his blog. “This kind of soul-searching is exactly what we need to ensure neuroscience evolves in the right direction.”
Five Observations

First, I've noted at various times that I find philosophical arguments about the human mind/brain and computing to be rather empty, mainly because they don't usefully engage the ideas actually used in investigating the mind or the brain. They don't provide pointers for doing better, whether in computational or other terms. I suspect that some of my humanist colleagues are attracted to these arguments because they don't want to entertain, even at a distance, any explicit account of mental operations. Some of them might balk at mind-body dualism as an explicit intellectual program, but they are, effectively, mind-body dualists and therefore mysterians as well. That is, they want the mind to be shrouded in mystery. Is humanistic thought (in that view) such that it cannot in principle embrace or approach explicit accounts of the mind? If so, why? Is this a methodological or a metaphysical commitment (does it matter?)?

Second, if the computer metaphor isn't adequate, does it nonetheless have some role to play in understanding the mind? For instance, I've been arguing that language is the simplest thing humans do that involves computation. In this view computation is a very high-level brain process. That implies, of course, that we're going to need other concepts for understanding the brain – such as control theory and complex dynamics. [I note in passing that the basic arithmetic computation we learn in primary school is a highly constrained and specialized form of language.]
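
To illustrate that bracketed remark: column addition, as taught in primary school, is a procedure over digit strings, a tiny and rigidly constrained "language" whose rules can be followed without knowing what the numerals name. A minimal sketch (mine, purely illustrative):

# Column addition as symbol manipulation: walk the digit strings right to
# left, writing a digit and carrying, exactly as taught in primary school.
def column_add(x: str, y: str) -> str:
    x, y = x.zfill(len(y)), y.zfill(len(x))    # pad the shorter string
    digits, carry = [], 0
    for a, b in zip(reversed(x), reversed(y)):
        s = int(a) + int(b) + carry
        digits.append(str(s % 10))             # write the ones digit
        carry = s // 10                        # carry the rest
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(column_add("478", "64"))   # "542"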

Third, we do know quite a bit about biological mechanisms at the molecular and cellular levels. But we don't yet know how those processes "add up to" a brain, though we're working on it. See, for example, the OpenWorm project, which is an attempt to simulate the roundworm Caenorhabditis elegans at the cellular level. C. elegans has 959 cells, including 302 neurons and 95 muscle cells. Then we have the Blue Brain Project, which is an attempt to simulate a rodent brain at some neuronal level. What's going on in these simulations? I note that, while Searle said nothing about biology in his original formulation of his well-known Chinese Room argument (against the computational view of mind), he has some more recent observations in which he explicitly references biology, wondering how much of biological mechanism is "essential for duplicating the causal powers of the original."
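
To give a sense of what "simulate at some neuronal level" means in practice, here is a toy leaky integrate-and-fire neuron, a far simpler unit than the biophysically detailed models those projects actually use; the point is only to show what stepping a neuron's state through time looks like. The parameters are illustrative, not anyone's published values.

# Toy leaky integrate-and-fire neuron: the membrane voltage leaks toward a
# resting value, input current pushes it up, and crossing threshold counts
# as a spike, after which the voltage resets. Parameters are illustrative.
def simulate_lif(current, steps=200, dt=1.0, tau=20.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    v, spike_times = v_rest, []
    for t in range(steps):
        dv = (-(v - v_rest) + current) / tau   # leak plus external drive
        v += dv * dt
        if v >= v_thresh:
            spike_times.append(t)
            v = v_reset
    return spike_times

print(len(simulate_lif(current=20.0)))   # spike count for a constant input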

Fourth, the upshot of Searle's Chinese Room argument is that computation is a purely syntactic process. It cannot encompass meaning. It lacks intentionality and so cannot be about anything. All of which is to say, it cannot connect with a world outside itself. A Universal Turing Machine certainly seems to be that kind of thing, doesn't it?
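
That last remark can be made concrete. A Turing machine is nothing but a table of rules for rewriting symbols on a tape; nothing in the machinery refers to what the symbols are about. A minimal sketch (the interpreter and the toy machine are mine, for illustration):

# A bare-bones Turing machine: (state, symbol) -> (write, move, next state).
# Nothing here knows what the symbols "mean"; it is rule-following all the way down.
def run_tm(rules, tape, state="start", halt="halt", blank="_", max_steps=1000):
    cells, head = dict(enumerate(tape)), 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Toy machine: scan right over a block of 1s and append one more (unary increment).
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_tm(rules, "111"))   # "1111"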

Fifth, it is nonetheless interesting and telling that computers give us a way of simulating anything we can describe in sufficient detail of just the right kind. Including neurons and brains.
