Sunday, February 5, 2017

Mark Changizi on why we're not going to build an artificial human brain anytime soon, if ever

This is from an article Changizi published in Discover in 2011. After making the point that digital computers are easily delineated into parts and wholes at distinctly different physical and functional levels, Changizi observes:
In fact, when scientists create simulations that include digital circuits evolving on their own—and include the messy voltage dynamics of the transistors and other lowest-level components—what they get are inelegant “gremlin” circuits whose behavior is determined by incidental properties of the way transistors implement gates. The resultant circuits have blurry joints—i.e., the distinction between one level of explanation and the next is hazy—so hazy that it is not quite meaningful to say there are logic gates any longer. Even small circuits built, or evolved, in this way are nearly indecipherable.

Are brains like the logical, predictable computers sitting on our desks, with sharply delineated levels of description? At first glance they might seem to be: cortical areas, columns, microcolumns, neurons, synapses, and so on, ending with the genome.

Or, are brains like those digital circuits allowed to evolve on their own, and which pay no mind to whether or not the nakedest ape can comprehend the result? Might the brain’s joints be blurry, with each lower level reaching up to infect the next? If this were the case, then in putting together an artificial brain we don’t have the luxury of just building at one level and ignoring the complexity in levels below it.

Just as evolution leads to digital circuits that aren’t comprehensible in terms of logic gates—one has to go to the transistor level to crack them—evolution probably led to neural circuits that aren’t comprehensible in terms of neurons. It may be that, to understand the neuronal machinery, we have no choice but to go below the neuron. Perhaps all the way down.
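
To make the evolved-circuit point concrete, here is a toy sketch of my own (not Changizi's, and far cruder than real evolvable-hardware experiments): a genetic algorithm wires up a handful of gates whose behavior is simulated at a messy analog level, each gate position carrying its own incidental gain and offset, and fitness is scored on that analog behavior while evolving toward XOR. The gate model, the random quirks, and the XOR target are all illustrative assumptions; the only point is that nothing in the selection process rewards clean, decomposable logic.

# Toy sketch: evolving a tiny "circuit" scored on messy analog behavior.
# Everything here (gate model, quirks, XOR target) is illustrative, not
# taken from Changizi's article or from actual evolvable-hardware work.
import math
import random

random.seed(0)

N_GATES = 6          # gates beyond the two inputs
POP, GENS = 60, 200  # population size and number of generations

# "Device physics": every gate position gets fixed, incidental quirks.
GAIN   = [random.uniform(3.0, 9.0)  for _ in range(N_GATES)]
OFFSET = [random.uniform(-0.4, 0.4) for _ in range(N_GATES)]

def gate(x, y, g, b):
    """Saturating analog element: low when both inputs are high, high when
    both are low, and somewhere in between otherwise. Not an ideal gate."""
    return 1.0 / (1.0 + math.exp(g * (x + y - 1.0 + b)))

def simulate(genome, a, b):
    """genome[i] = (src1, src2): which earlier nodes gate i listens to."""
    nodes = [a, b]
    for i, (s1, s2) in enumerate(genome):
        nodes.append(gate(nodes[s1], nodes[s2], GAIN[i], OFFSET[i]))
    return nodes  # last node is the circuit's output

def fitness(genome):
    """Closeness of the analog output to XOR over the four input pairs."""
    err = 0.0
    for a in (0.0, 1.0):
        for b in (0.0, 1.0):
            err += abs(simulate(genome, a, b)[-1] - float(a != b))
    return -err

def random_genome():
    return [(random.randrange(i + 2), random.randrange(i + 2))
            for i in range(N_GATES)]

def mutate(genome):
    child = list(genome)
    i = random.randrange(N_GATES)
    child[i] = (random.randrange(i + 2), random.randrange(i + 2))
    return child

pop = [random_genome() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 4]
    pop = survivors + [mutate(random.choice(survivors))
                       for _ in range(POP - len(survivors))]

best = max(pop, key=fitness)
print("fitness:", fitness(best))
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        nodes = simulate(best, a, b)
        # Internal node values need not snap to clean logic levels;
        # this is the "blurry joints" idea from the quoted passage.
        print(a, b, ["%.2f" % v for v in nodes[2:]])

Because fitness only measures the analog output, selection is free to lean on the incidental GAIN and OFFSET quirks, and the intermediate voltages the winning circuit settles on need not correspond to anything a logic-gate description could capture. That, in miniature, is the worry Changizi raises about reverse-engineering brains one tidy level at a time.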
