Tuesday, July 26, 2022

Once more around the merry-go-round: Is the brain a computer?

I have argued I don’t know how many times that language is the simplest human activity that must be considered computational (e.g. in this comment on Yann LeCun’s latest proposal). That necessarily implies that, fundamentally, the brain is something else, but what?

What’s a computer?

Let’s step back a bit and consider an argument by John Searle. It may seem a bit strange, but bear with me.

In 1950, Alan Turing published an article in which he set out the Turing Test. The purpose of the test was to establish whether a computer had genuine intelligence: if an expert cannot distinguish between human intelligent performance and computer performance, then the computer has genuine human intelligence. It is important to note that Turing called his article “Computing Machinery and Intelligence.” In those days “computer” meant a person who computes. A computer was like a runner or a singer, someone who does the activity in question. The machines were not called “computers” but “computing machinery.”

The invention of machines that can do what human computers did has led to a change in the vocabulary. Most of us now think of “computer” as naming a type of machinery and not as a type of person. But it is important to see that in the literal, real, observer-independent sense in which humans compute, mechanical computers do not compute. They go through a set of transitions in electronic states that we can interpret computationally. The transitions in those electronic states are absolute or observer independent, but the computation is observer relative. The transitions in physical states are just electrical sequences unless some conscious agent can give them a computational interpretation.

This is an important point for understanding the significance of the computer revolution. When I, a human computer, add 2 + 2 to get 4, that computation is observer independent, intrinsic, original, and real. When my pocket calculator, a mechanical computer, does the same computation, the computation is observer relative, derivative, and dependent on human interpretation. There is no psychological reality at all to what is happening in the pocket calculator. [1]

He goes on in this vein for a bit and then arrives at this statement: “First, a digital computer is a syntactical machine. It manipulates symbols and does nothing else.” If you are familiar with his famous Chinese Room argument then you’ve heard this before. After a brief précis of that argument, which is of no particular interest here, Searle arrives at the point that interests me:

Except for the cases of computations carried out by conscious human beings, computation, as defined by Alan Turing and as implemented in actual pieces of machinery, is observer relative. The brute physical state transitions in a piece of electronic machinery are only computations relative to some actual or possible consciousness that can interpret the processes computationally. It is an epistemically objective fact that I am writing this in a Word program, but a Word program, though implemented electronically, is not an electrical phenomenon; it exists only relative to an observer.

Of course, he’s already said this before, but I repeat it because it’s a strange way of talking – at least I found it strange when I first read it – and so a bit of repetition is worthwhile.

Physically, a computer is just an extremely complex pile of electronic machinery. But we have designed it in such a way that the state transitions in its circuitry perform operations that we find useful. Most generally, we think of them as computation. The machinery is a computer because it has been designed to be one.

Is the brain a computer?

With Searle’s argument in mind we can now ask: Is the human brain, or any brain, a computer? Physically it is certainly very different from any electronic computer, but it does seem to be a complex meshwork that transmits many electrochemical signals in complex patterns. Are those signals performing calculations? Given Searle’s argument, the answer to that question would seem to depend on just what the brain was designed to do. But, alas, the brain wasn’t designed in any ordinary sense of the word, though evolutionary biologists do sometimes talk about evolution as a process of design. If so, it is design without a designer.

Given this, does it make sense for us to say that the brain IS a computer? I emphasize the “is” because, of course, we can simulate brains and parts of brains, but a simulation is one thing and the thing being simulated is quite another. The simulation of an atomic explosion is not the same as a real atomic explosion. Or, to switch examples, as Searle remarks, “Even with a perfect computer emulation of the stomach, you cannot then stuff a pizza into the computer and expect the computer to digest it.”

So, I’m not talking about whether or not we can produce a computer simulation of the brain. Of course we can. I’m talking about the brain itself. Is it a computer? Consider this passage:

This brings us to the question: what are the type of problems where generating a simulation is a more viable strategy than performing a detailed computation? And if so, what are the kind of simulators that might be relevant for consciousness? The answer to the first question has to do with the difference of say computing an explicit solution of a differential equation in order to determine the trajectory of a system in phase space versus mechanistically mimicking the given vector field of the equation within which an entity denoting the system is simply allowed to evolve thereby reconstructing its trajectory in phase space. The former involves explicit computational operations, whereas the latter simply mimics the dynamics of the system being simulated on a customized hardware. For complex problems involving a large number of variables and/or model uncertainty, the cost of inference by computation may scale very fast, whereas simulations generating outcomes of models or counterfactual models may be far more efficient. In fact, in control theory, the method of eigenvalue assignment is used precisely to implement the dynamics of a given system on a standardized hardware. [...] if the brain is indeed tasked with estimating the dynamics of a complex world filled with uncertainties, including hidden psychological states of other agents (for a game-theoretic discussion on this see [1–4,9]), then in order to act and achieve its goals, relying on pure computational inference would arguably be extremely costly and slow, whereas implementing simulations of world models as described above, on its cellular and molecular hardware would be a more viable alternative. These simulation engines are customized during the process of learning and development to acquire models of the world.[2]

That suggests that, no, the brain is not a computer, not if by that you mean that it is performing explicit numerical calculations. It isn’t performing calculations at all. It’s just passing signals between neurons, thousands and thousands of them each second. What those signals are doing, what they are achieving, depends on how the whole shebang is connected to the world in which the brain operates.
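To make the quoted distinction a bit more concrete, here is a loose software analogy – only an analogy, since the paper has analog hardware in mind, and all names and parameters below are my own illustrative choices. For a harmonic oscillator, evaluating the closed-form solution is “explicit computation,” while stepping a surrogate state along the vector field and reading off where it ends up is “simulation”:

```python
import math

# A loose software analogy for the quoted distinction (the paper has
# analog hardware in mind). System: a harmonic oscillator in phase
# space, dx/dt = v, dv/dt = -OMEGA**2 * x. All values are illustrative.

OMEGA = 2.0          # assumed natural frequency
X0, V0 = 1.0, 0.0    # initial state (position, velocity)

def explicit_solution(t):
    """'Computation': evaluate the closed-form solution directly."""
    return X0 * math.cos(OMEGA * t) + (V0 / OMEGA) * math.sin(OMEGA * t)

def follow_vector_field(t, dt=1e-4):
    """'Simulation': let a stand-in for the system evolve along the
    vector field step by step and read the trajectory off at time t."""
    x, v = X0, V0
    for _ in range(int(t / dt)):
        x, v = x + v * dt, v - OMEGA ** 2 * x * dt  # forward Euler step
    return x

t = 1.5
print(explicit_solution(t))    # cos(3.0) ≈ -0.98999
print(follow_vector_field(t))  # approximately the same value
```

In software both routes are still symbol manipulation, of course; the paper’s point is that the second strategy can instead be implemented directly in physical dynamics, with no explicit calculation anywhere.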

Consider experiments on mental rotation [3]. A subject is presented with a pair of 2-D or 3-D images. In some cases the images depict the same object, but from different points of view; in other cases the images depict two different objects. The subject is asked whether or not the images depict the same object. To perform the task the subject has to mentally rotate one of the images until it matches the other. A match will be achieved only if the images depict the same object; if no match can be achieved, the subject is looking at two different objects.

What researchers found is that the length of time required to reach a decision was proportional to the angle between the two views: the larger the angle, the longer the decision takes. If the process were numerical, there would be no reason to expect computation time to be proportional to the size of the angle being computed. That strongly suggests the process is analog, not numerical. If the brain IS a computer, it is not a digital computer.
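A toy sketch – not a cognitive model, and the function names and step size are my own assumptions – may help show why that inference goes through. A one-shot numerical rotation costs the same whatever the angle; a quasi-analog process that sweeps through intermediate orientations takes a number of steps proportional to it:

```python
import math

# A toy illustration, not a cognitive model. One-shot numerical
# rotation costs the same whatever the angle; a quasi-analog sweep
# through intermediate orientations takes time proportional to it.

def rotate_once(point, angle):
    """Digital-style rotation: one matrix multiply, with cost
    independent of the angle being computed."""
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y)

def rotate_incrementally(point, angle, step=math.radians(1)):
    """Analog-style rotation: sweep through intermediate orientations;
    the number of steps (the 'time') grows linearly with the angle."""
    swept, steps = 0.0, 0
    while swept < angle:
        point = rotate_once(point, step)  # one small increment
        swept += step
        steps += 1
    return point, steps

_, t40 = rotate_incrementally((1.0, 0.0), math.radians(40))
_, t120 = rotate_incrementally((1.0, 0.0), math.radians(120))
print(t40, t120)  # ≈ 40 vs ≈ 120: triple the angle, triple the "time"
```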

For various reasons – those experiments are only one of them – I have long been of the view that, at least at the sensorimotor level, the brain constructs quasi-analog models of the world and uses them to track the sensory field and to generate motor actions for operating in the world. These models are also called on in much of what is called common-sense knowledge. Common sense proved very problematic for symbolic computation in the era of GOFAI models, from the beginning of AI up into the 1980s, and it is proving somewhat problematic for current LLMs as well. In any given situation one simply calls up the necessary simulations and then generates whatever verbal commentary seems necessary or useful. GOFAI investigators were faced with the task of hand-coding a seemingly endless collection of propositions about common-sense matters, while LLMs are limited by the fact that they have access only to text, not to the underlying simulations on which the text is based.

I arrived at this view partially on the basis of an elegant book from 1973, Behavior: The Control of Perception, by the late William Powers. As the title indicates, he developed a model of human behavior from classical control theory.
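For readers unfamiliar with Powers, here is a minimal sketch of the kind of negative-feedback loop his title points to: the system acts so as to keep its perception of some variable at a reference value, whatever disturbances the environment adds. The gains, the disturbance model, and the variable names are my own illustrative assumptions, not Powers’s parameters:

```python
import random

# Minimal sketch of a perceptual control loop: act so as to keep the
# *perception* of a variable at a reference value despite unpredictable
# disturbances. All constants here are illustrative assumptions.

REFERENCE = 10.0   # the perceptual value the system "wants"
GAIN = 0.5         # how strongly error drives action

output, perception = 0.0, 0.0
for _ in range(200):
    disturbance = random.uniform(-1.0, 1.0)  # unpredictable push
    environment = output + disturbance       # world mixes action and push
    perception = environment                 # simple perceptual input
    error = REFERENCE - perception           # compare to reference
    output += GAIN * error                   # act so as to reduce error

print(round(perception, 2))  # stays near 10.0 despite the disturbance
```

The point of the design is that behavior (the output) varies however it must; what stays stable is the controlled perception, which is Powers’s central claim.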

References

[1] John R. Searle, What Your Computer Can’t Know, The New York Review of Books, October 9, 2014, http://www.nybooks.com/articles/2014/10/09/what-your-computer-cant-know/

[2] Arsiwalla, X.D., Signorelli, C.M., Puigbo, JY., Freire, I.T., Verschure, P.F.M.J. Are Brains Computers, Emulators or Simulators? In V. Vouloutsi et al. (Eds.) Living Machines 2018. Lecture Notes in Computer Science, vol 10928. Springer, https://doi.org/10.1007/978-3-319-95972-6_3.

[3] Mental rotation, Wikipedia, https://en.wikipedia.org/wiki/Mental_rotation.

2 comments:

  1. The argument that the brain can't be a computer, because computation is an observer-relative notion is a much better foil for computationalist models than the Chinese Room ever was. Ultimately, a computer is a device that extends the mind of its user---the mind that seems to stare back at you out from the box is just a reflection of your own. Computers augment minds with certain syntactical capacities they don't natively possess, such as the speedy manipulation of symbols, or near-perfect recall of data; but their semantic capacities are inherited---it is only by virtue of the computer's user interpreting the symbols the computer manipulates as standing for certain objects, whether concrete or abstract (as in the case of numbers), that the question of what it computes is decided. Hence, no computation without interpreting mind; consequently, mind itself can't be computational, on pain of regress. I've elaborated on this on 3QD, the best result of which was that I was contacted by David Ellermann, who has developed an explicit Turing Machine based version of the argument that I think is the clearest version I know (which can be found in chapter 7 here: https://library.memoryoftheworld.org/#/book/50ebec6e-3d30-4094-9079-04694f7db07a).

  2. Thanks, Jochen. Ironically, it's Searle's argument in both cases. Thanks for the link.
