
Wednesday, September 26, 2018

Why we'll never be able to build technology for Direct Brain-to-Brain Communication

I know Elon Musk has started a company that aims to create high-bandwidth brain-machine communication (Neuralink). I have no idea what kind of technology he's got in mind (the website has nothing). I'm inclined to think, however, that any such linkage will require considerable learning in order to be usable. Here's a post from May 2013 that's about high-bandwidth brain-to-brain communication, which is different from brain-to-machine. But the basic argument applies.
* * * * *

Would it be possible, some time in the unpredictable future, for people to have direct brain-to-brain communication, perhaps using some amazing nanotechnology that would allow massive point-to-point linkage between brains without shredding them? Sounds cool, no? Alas, I don’t think it will be possible, even with that magical nanotech. Here are some old notes in which I explore the problem.

My basic point, of course, is that brains coupled through music-making are linked as directly and intimately as computers communicating through a network (an argument I made in Chapters 2 and 3 of Beethoven’s Anvil, and variously HERE, HERE, HERE, and HERE). And, like networked computers, networked brains are subject to constraints. In the human case the effect of those constraints is that the collective computing space can be no larger than the computing space of a single unconstrained brain. This is true no matter how many brains are so coupled, despite the fact that these coupled brains have many more computing elements (i.e. neurons) than a single brain has.

The explanatory problem, as I see it, is that we tend to think of brains as consisting of a lot of elements. Thus, an effective connection between brains should consist of an element-to-element, neuron-to-neuron, hook-up, no? Compared to that, music seems pretty diffuse, though there’s no doubt that, somehow, it works.

So, let’s take a ploy from science fiction, direct neural coupling. I’ve seen this ploy used for man-machine communication (by, e.g., Samuel Delany) and surely someone has used it for human-to-human communication (perhaps mediated by a machine hub). Let’s try to imagine how this might work.

The first problem is simply one of physical technique. Neurons are very small and very many. How do we build a connector that can hook up with 10,000,000 distinctly different neurons without destroying the brain? We use Magic, that’s what we do. Let’s just assume it’s possible: Shazzaayum! It’s done.

Given our Magic-Mega-Point-to-Point (MMPTP) coupling, how do we match the neurons in one brain to those in another? After all, each strand in this cable is going to run from one neuron to another. If our nervous system were like that of C. elegans (an all but microscopic worm), there would be no problem. For that nervous system is very small (302 neurons, I believe) and each neuron has a unique identity. It would thus be easy to match neurons in different individuals of C. elegans. But human brains are not like that. Individual neurons do not have individual identities. There is no way to create a one-to-one match between the neurons in one brain and corresponding neurons (having the same identity within the brain) in another; two brains don’t even have the same number of neurons, much less a scheme allowing for matching identities. In this respect, neurons are like hairs, and unlike fingers and toes, where it’s easy to match big toe to big toe, index finger to index finger, and so forth.
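
Here’s a minimal sketch of the contrast, in Python, using toy data rather than any real connectome (the worm neuron names are just examples of the kind of fixed cell identities C. elegans has). When every neuron carries a name, matching across individuals is a simple lookup; when neurons are an anonymous mass, there’s nothing to look up.

```python
# Toy sketch: matching neurons across two individuals by identity.
# In a C. elegans-like nervous system every cell has a fixed name,
# so pairing neurons across two worms is just a dictionary join.

worm_a = {"AVAL": "activity-A1", "AVAR": "activity-A2"}  # name -> recorded activity
worm_b = {"AVAL": "activity-B1", "AVAR": "activity-B2"}

pairs = {name: (worm_a[name], worm_b[name]) for name in worm_a if name in worm_b}
print(pairs)  # every neuron finds its counterpart by name

# A human brain offers no such key: billions of anonymous cells,
# and two brains don't even agree on the count.
human_a_neurons = 86_000_000_000   # rough order of magnitude, no names attached
human_b_neurons = 85_200_000_000   # a different brain, a different number
# With no identities there is nothing to join on -- no principled one-to-one match.
```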

So, that’s one problem: how to match the neurons in two brains. About all I can see to do is to match neurons on the basis of location at, say, the millimeter level of granularity. Perhaps we choose 10M or 100M neurons in the corpus callosum and just link them up. Then there’s another problem: How does a brain tell whether a given neural impulse comes from within itself or from the other brain? If it can’t make that distinction, how can communication take place?
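
Here’s a minimal sketch of that location-based matching, assuming, generously, that we could read out a 3D position for every neuron we tap; the function and the coordinates are made up for illustration. Notice that within a single millimeter voxel the pairing is arbitrary, and nothing in the sketch even touches the second problem, telling self-generated activity from activity arriving over the link.

```python
from collections import defaultdict

def mm_voxel(pos):
    """Truncate an (x, y, z) position in millimeters to its 1 mm voxel."""
    return tuple(int(c) for c in pos)

def match_by_location(neurons_a, neurons_b):
    """Pair up neurons from two brains that land in the same 1 mm voxel.

    neurons_a, neurons_b: lists of (x, y, z) positions in mm.
    Returns (index_in_a, index_in_b) pairs. Within a voxel the choice of
    partner is arbitrary -- that arbitrariness is the point.
    """
    buckets = defaultdict(list)
    for j, pos in enumerate(neurons_b):
        buckets[mm_voxel(pos)].append(j)

    pairs = []
    for i, pos in enumerate(neurons_a):
        candidates = buckets[mm_voxel(pos)]
        if candidates:
            pairs.append((i, candidates.pop()))  # grab some neuron, any neuron
    return pairs

# Made-up coordinates for two tiny "brains":
a = [(10.2, 4.7, 33.1), (10.8, 4.1, 33.9)]
b = [(10.5, 4.3, 33.5)]
print(match_by_location(a, b))  # [(0, 0)] -- one arbitrary pair, one neuron left over
```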

Real brains don’t have any ‘free’ input-output ports. If they did, they could be used for brain-to-brain communication: anything coming in through such a port would be known to come from outside, and if you wanted to contact the other brain, you would send your signal out through the proper port. But such ports don’t exist in real brains.

What, then, happens when we finally couple two people through our wonderful future-tech MMPTP? The neurons are not going to correspond in a rigorous way and they’re not going to know what’s coming from within vs. outside. In that situation I would imagine that, at best, each person would experience a bunch of noise.

I haven’t got the foggiest idea how that noise would feel. Maybe it will just blur things up; but it might also cause massive confusion and bring perception, thought, and action to a crashing halt. The only thing I’m reasonably sure of is that it won’t yield the intimate and intuitive communion of one mind with another.

However, if this coupling doesn’t bring things to a halt, it’s possible that, in time (weeks? months? years?), the two individuals with their coupled brains will work things out. The brains will reorganize and figure out how to deal with one another; that is, they will learn. The self-organizing neural processes within each brain will learn to deal with activity coming from the other brain and incorporate it into their routines. [Sort of like musicians from different cultures meeting and jamming and gradually arriving at ways to play together.]

Self-organization is the key. It’s not only that individual brains are self-organized, built from inside, but that individual brains consist of many regions, each of which is self-organized and quasi-autonomous. Each of these regions is connected to many other regions and is interacting with them continuously, incorporating their activities into its own self-organized patterns. [Like musicians jamming. Each makes their own decisions and their own sounds, but is listening to all the others and acting on what they hear.]

And, as I said, that’s how brains are built, from the very beginning. The process is quite different from what I did when I assembled my stereo amplifier from a kit. When I built my amplifier I laid all the parts out and assembled the basic sub-circuits. I then connected those together on the chassis and, when it was all connected, plugged it in, turned it on, and hoped for the best. That is, no electricity flowed through these components until they were all connected. [BTW, it didn’t work at first. There was a cold solder joint in the power amplifier circuit. Once I’d fixed that, I was in business.]

Brain development isn’t like that at all. The individual elements are living cells; they’re operational from birth. And the operation of one neuron affects that of its near and distant neighbors. If this were not the case, it would be impossible to construct a large and complex brain like those of vertebrates; the components wouldn’t mesh effectively. So there’s never really a magic moment like that in the life of a stereo amplifier when all the dead elements suddenly become alive. The closest we’ve got is the moment of birth, when the operational environment for the nervous system becomes dramatically changed, all of it, at once, and forever. And then it keeps on growing and developing, self-organizing (region by region) in interaction with the external world.

But brains remain forever unique. And that means that our fantasy MMPTP coupler is, in fact, no better than music. Real music, that we can make any time, that we’ve been making since before speech evolved, that’s as direct and intimate as it gets. The Vulcan Mind Meld is science fiction; music is not.

But, as I said at the beginning, musical coupling is subject to one constraint: the collective computational space of the coupled system is no larger than that of a single unconstrained brain. That means that music is a very good way for brains to mutually “calibrate” one another, to match moves and arrive at common understandings. Music creates the trust between individuals that language needs in order to be effective.

* * * * *

Note: I've included a slightly different version of this post in a working paper, Coupling and Human Community: Miscellaneous Notes on the Fundamental Physical Foundations of Human Mind, Culture, and Society, February 2015, pp. 19-20, https://www.academia.edu/10777462/Coupling_and_Human_Community_Miscellaneous_Notes_on_the_Fundamental_Physical_Foundations_of_Human_Mind_Culture_and_Society

5 comments:

  1. Connecting neurons ain't the solution. Connecting brain WAVES is, or a spatiotemporal map of them. These can be made, via intermediary processing from the tech, to figure out some form of synchronization. Assuming we all use a very similar fundamental coding solution (highly likely) the intermediary will have to provide translation after co-adaptation. If a very large number of individuals hook up then a common solution may fall out. This is probably what happened during the development of the genetic code originally, allowing lateral gene transfer.

  2. Given that the meaning of a signal in the brain is, in part, a function of where it's coming from, it's not obvious to me that looking to waves will solve the registration problem. And you're asking an awful lot of the intermediary translation system. And co-adaptation sounds like learning to me. On the whole, it's not obvious that the situation you're describing is very different from the one I described.

  3. Hi Bill, I generally agree, but think it's neither neurons nor brain waves that will communicate.

    "Brain-to-Brain Communication" happens all the time with phone conversations. The real question is whether such conversations - and they will always be just that, not DIRECT neural/brain-wave links - can occur quietly and without complete grammar on a sort of shorthand level - like private, radio-wave tweets to someone who knows the topic of discussion? The answer to THAT question, I think, is a tentative 'yes'!

    This is somewhat analogous to the question of whether minds can be downloaded to digital mechanisms. Again, the answer is probably 'yes'; but, these won't be the complete original minds any more than someone's social media persona is reflective of their entire mind. Whether that download can be legally considered 'me' depends on whether it has autonomy, ethico-esthetic consistency with the original 'me', and whether the download contains a significant percentage of my life experiences and anecdotes at a level suitable for ongoing conceptual (or literary) analysis. This is going to have nothing directly to do with neurons or glia, any more than my daily activities have anything directly to do with quarks & gluons! Should my digital diary be attempting to record my 'life' at these ultra-reductionist levels?

    I'm thinking of the fictions from Kenyon College biologist and novelist Joan Slonczewski, such as BRAIN PLAGUE.

    Best, Mark Crosby

  4. "This is somewhat analogous to the question of whether minds can be downloaded to digital mechanisms. Again, the answer is probably 'yes'"

    I disagree here. I think the answer is most likely NO, and for reasons similar to the impossibility of DIRECT brain-to-brain communication.

  5. I have no idea but keeping the same metaphor across different systems which assist specialists adjusting to the transition of moving from, say, literature to the unfamiliar environment of music or vice versa.

    It looks like a plan.
