Monday, April 5, 2021

On the incoherence of the idea of brain-to-brain thought transfer [once again]

Once again I find myself thinking about the (im)plausibility of direct brain-to-brain communication of thoughts, most recently hyped by Elon Musk in conjunction with his Neuralink project. What I’m wondering is why Musk doesn’t seem to have thought about the most obvious objections to the project, and not only Musk, but others who have proposed such linkage (Christof Koch and Rodolfo Llinás). I’ve addressed that issue once before (Brain-to-brain thought transfer @ 3QD) but I’d like to take another shot at it.

Let’s start by observing that the ideas of “thoughts” and “thinking” are commonsense ideas and have not been defined in neural terms. There is thus a conceptual GAP between them and the terms used to talk about brains, even in the most casual way. To connect the two realms one must think it through. And just what does that mean? What items in each realm must be considered?

It’s all well and good to say that thoughts and thinking exist in the brain, but that assertion, with which I concur, doesn’t tell us how those things exist in the brain. How many neurons does it take to support a single thought – one, 10, 89, 3000, more? Do they have to be in the same region of the brain or can they be spread out? If so, how far: a hemisphere, the whole brain? How do thoughts flow through axons? Can a whole thought be squeezed through a single axon? These questions may seem (faintly) ridiculous, but are they? What is clear is that they aren’t the terms in which neuroscientists investigate the brain.

And that’s fine, really. I have no problem with that. But it does mean that the relationship between thought/thinking and neurons is left undefined. Talk of brain-to-brain linkage, however, takes place in terms of neurons and brain tissue. Such talk tends to be rather loose and vague, though talk of more modest neural interfaces (which, for example, allow for neural control of prosthetic limbs) is quite precise; such things, after all, have actually been built.

Here’s what I suspect is going on. When these people – Musk, Koch, Llinás – think about the issue they start with the existence of various technologies connecting brains with outside devices of one kind or another. Such technologies already exist in one form or another. They then imagine having a whole lot of them (Koch writes of tens of millions) running between two brains. That sets up (some kind of) a communication channel between the brains. They then apply what cognitive scientists call the conduit metaphor and, voilà! direct brain-to-brain thought transfer. Of course they don’t explicitly think, “and now we apply the conduit metaphor” – that’s not how these things go, is it? – they just do it.

What is the conduit metaphor? (See this post for a more careful explanation of it.) It is the idea that we communicate with one another through some kind of conduit, often imaginary. After all, when we converse – surely the prototypical case of human communication – there is nothing between us but the air. Note, though, that all we exchange directly in communication are word forms. The ideas aren’t themselves in those word forms. Rather, ideas are associated with them in the processes of comprehension or production. But those ideas exist only in our heads (minds, brains), not in the signals.

When we apply the conduit metaphor to this artificially constructed brain-to-brain communication channel, thought transfer is automatic. Note in particular that the problem I identified – in this post at 3 Quarks Daily and earlier at this post here at New Savanna – is that a brain has no way of telling whether a neural impulse comes from within itself, from another brain, or, for that matter, from an investigator or surgeon zapping the brain with an electrical current. Regardless of where it comes from, an impulse is an impulse is an impulse. They’re all alike. If neural impulses can’t be separated into mine and thine, then how can there be thought transfer?

But – and here’s the important point – this problem doesn’t arise in ordinary speech communication. You know who you’re talking to, and there’s no trouble distinguishing their words from yours. Thus there is nothing in the conduit metaphor to tell you to check who’s sending a given signal. The conduit metaphor simply isn’t rich enough to handle the problem of direct brain-to-brain communication.
