Monday, September 14, 2020

Brain-to-brain thought transfer @ 3QD

My current post at 3 Quarks Daily is about a subject I’ve been thinking about since early in 2002: direct transfer of thought between people through technology linking their brains together.

According to my notes I “ran” a thought experiment on the matter and concluded that it was impossible. That was sometime in January 2002. I subsequently posted that Gedankenexperiment here at New Savanna in May 2013 and have been revisiting the topic now and then.

In that original thought experiment I assumed that the relevant technology was available – though in fact we could not build it then and still cannot – because that’s what you do in thought experiments. You create a highly constrained artificial situation – in this case, the ability to couple two brains together at the neuron level across millions and millions of neurons – so that you can explore something else – in this case, the effect of such coupling. Here’s my conclusion:
Given our Magic-Mega-Point-to-Point (MMPTP) coupling, how do we match the neurons in one brain to those in another? For each strand in this cable is going to run from one neuron to another. If our nervous system were like that of C. elegans, there would be no problem. For that nervous system is very small and each neuron has a unique identity. It would be easy to match neurons in different individuals of C. elegans. But human brains are not like that. Individual neurons do not have individual identities. There is no way to match the neurons in one brain with those in another.

What, then, happens when we couple two people through an MMPTP? Each experiences a bunch of noise, that’s what. I haven’t got the foggiest idea how that noise will feel. Maybe it will just blur things up; but it might also cause massive confusion and bring perception, thought, and action to a crashing halt. But it won’t yield the intimate and intuitive communion of one mind with another.
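
To make that asymmetry concrete, here is a minimal Python sketch. Everything in it is invented for illustration (the neuron names, the activity values, the counts), but it shows the difference between a correspondence grounded in shared labels and a pairing imposed from outside:

    import random

    # In C. elegans every neuron carries a canonical name (AVAL, AVAR, PVCL, ...)
    # that is the same in every individual, so matching neurons between two
    # worms is just the identity map on those names.
    worm_a = {"AVAL": 0.7, "AVAR": 0.2, "PVCL": 0.9}   # invented activity values
    worm_b = {"AVAL": 0.1, "AVAR": 0.8, "PVCL": 0.4}
    worm_mapping = {name: name for name in worm_a}      # a well-defined correspondence

    # Human cortical neurons carry no such labels. The best an MMPTP cable
    # could do is pair anonymous neurons arbitrarily, and any one arbitrary
    # pairing is exactly as (un)justified as any other.
    brain_a = list(range(10_000))                       # stand-ins for unlabeled neurons
    brain_b = list(range(10_000))
    random.shuffle(brain_b)
    arbitrary_mapping = dict(zip(brain_a, brain_b))     # a pairing, not a correspondence

Any wiring diagram the cable imposes on two human brains is of the second kind: a pairing chosen by the engineer, not a correspondence discovered in the brains.
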
More recently I realized that there’s a simpler problem: “How does a brain tell whether or not a given neural impulse comes from it or from the other brain? If it can’t make the distinction, how can communication take place?”
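
The difficulty can be put in a line of arithmetic. In this sketch (Python again, with made-up numbers), a receiving neuron integrates whatever current arrives at its synapses, and addition destroys provenance:

    # Impulses generated within the neuron's own brain vs. impulses arriving
    # over the MMPTP coupling. The values are invented for illustration.
    native_inputs  = [0.3, 0.1, 0.5]
    foreign_inputs = [0.2, 0.4]

    # The neuron sees only the sum of its inputs.
    total_input = sum(native_inputs) + sum(foreign_inputs)   # 1.5

    # Nothing in the value 1.5 records which portion was native and which
    # was foreign; no inverse operation recovers the split.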

Here’s my problem: Given that direct brain-to-brain coupling won’t work, why does it keep coming up, not in science fiction, where such things are fair game, but in real proposals from intelligent thinkers (Elon Musk, Christof Koch, or Rodolfo Llinás) who, it seems to me, ought to know better? None of the proposals I’ve seen counters the objections I uncovered in my thought experiment. Now, let me be clear, I’m not complaining that they haven’t read that thought experiment. I’ve only posted it in two places, a private online venue back in 2002 (Howard Rheingold’s Brainstorms) and here at New Savanna, which doesn’t get much traffic. No, my complaint is that they don’t seem to have thought of those objections themselves.

One possibility is that my reasoning is faulty. If so, I’d like to know where I went wrong. Pending that, however, I have another suggestion.

Of course, direct brain-to-brain linkage is a cool idea, and that is part of its appeal. But there’s something else at work, something that William Powers expressed in a letter to me over four decades ago. The letter was a response to an article on Shakespeare’s Sonnet 129 that I’d published in MLN in 1976 [1]. Here’s what Powers said [2]:
There are always two levels of modeling going on. At one level, modeling consists of constructing a structure that, by its own rules, would behave like the system being modeled and, if one is lucky, produce that behavior by the same means (the same inner processes) as the system being modeled. That kind of model “runs by itself”; given initial conditions, the rules of the model will generate behavior.

But the other kind of modeling is always done at the same time: the modeler provides for himself some symbolic scheme together with rules for manipulating the symbols, for the purpose of reasoning about the other kind of model. [...]

The biggest problem in modeling is to remain aware of which model one is dealing with. Am I inside my own head reasoning about the model, or am I inside the model applying its rules to its experiences? This is especially difficult to keep straight when one is talking about cognitive processes; unless one is vividly aware of the problem one can shift back and forth between the two modes of modeling without realizing it.
That, that last paragraph, is where I think the problem lies. These various suggestions for direct brain-to-brain linkage have hardly anything explicit enough to be called a model. They’re verbal suggestions and little more. Beyond the idea that we’re going to have a lot of wires running between the two brains, or perhaps high-bandwidth radio transmission between two brain interfaces, there’s nothing to these ideas. When, almost 20 years ago, I asked, “How are we going to match up the neurons in the two brains?” I slipped into a level of detail which, as far as I can tell, none of these other thinkers has broached.

Consequently, they’ve lost sight of the distinction Powers makes between the object model, with its own explicit rules, and the thinker’s own “symbolic scheme together with rules for manipulating the symbols, for the purpose of reasoning about the other kind of model.” That is to say, it is obvious to us, standing outside the two linked brains and observing them, that in each brain some impulses arise internally while others originate in the other brain. The fact that we know that does not, however, mean that the two brains know it. Therein lies the problem. It simply hadn’t occurred to Musk, Koch, or Llinás that they were, in effect, projecting their own knowledge of the situation onto the brains they’d coupled together, if only in their imaginations.
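
Powers’s two levels can be made explicit in the same toy idiom. In the sketch below (hypothetical, like the others), the tagged log belongs to the modeler; the model’s own update rule never receives the tag:

    log = []   # the modeler's bookkeeping, outside the model

    def deliver(state, impulse, source):
        log.append((source, impulse))   # we, the modelers, record the provenance...
        return state + impulse          # ...but the model's rule sees only the impulse

    state = 0.0
    state = deliver(state, 0.3, source="own brain")
    state = deliver(state, 0.4, source="other brain")

    # `state` ends up the same whether an impulse was native or foreign.
    # Only `log`, which lives at the modeler's level of description,
    # preserves the distinction, and the coupled brains have no access to it.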

Both Koch and Llinás are distinguished neuroscientists, but they didn’t establish their reputations on such speculations. They earned them through thorough empirical investigation and through modeling of somewhat more limited imaginative scope, if you will. The problem Powers outlined is much less likely to arise in that kind of work. As for Musk, as far as I know he has no particular expertise in neuroscience at all. And yet, since he is an engineer and entrepreneur who must have a keen appreciation for the distinction between plans and designs and their physical realization, I almost expect more from him on this issue than from the neuroscientists.

* * * * *

[1] William Benzon, “Cognitive Networks and Literary Semantics,” MLN, Vol. 91 (1976), pp. 952-982, https://www.academia.edu/235111/Cognitive_Networks_and_Literary_Semantics.

[2] William T. Powers, “Powers on Benzon and Models,” MLN, Vol. 91, No. 6, Comparative Literature (Dec. 1976), pp. 1612-1619, http://www.jstor.org/stable/2907155.
