
Friday, September 3, 2021

The Chinese Room – I got it! I see where it’s going, or coming from. (I think)

Bump to the head of the queue. I’m thinking about this stuff. Though I should say more, I don’t find that intention is very useful in distinguishing ‘real’ from ‘artificial’ intelligence. Where do we find intention in the brain or, for that matter, in the whole organism? How would we create it in an artificial being? We haven’t got a clue on either score. For some other thoughts, in a somewhat different but still related context, see this post on Stanley Fish and meaning in literary criticism.

* * * * *
 
John Searle’s Chinese room argument is one of the best-known thought experiments in the contemporary philosophy of mind and has spawned endless commentary. I read it when it appeared in Behavioral and Brain Sciences in 1980 and was unimpressed [1]. Here’s a brief restatement from David Cole’s entry in The Stanford Encyclopedia of Philosophy [2]:
Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he produces appropriate strings of Chinese characters that fool those outside into thinking there is a Chinese speaker in the room. The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but does not produce real understanding. Hence the “Turing Test” is inadequate. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. The broader conclusion of the argument is that the theory that human minds are computer-like computational or information processing systems is refuted. Instead minds must result from biological processes; computers can at best simulate these biological processes.
The whole thing seemed to me irrelevant because it didn’t address any of the ideas and models actually used in developing computer simulations of mental processes. There was nothing in there that I could use to improve my work. The argument just seemed useless to me. For that matter, most of the philosophical discussion on this has seemed useless for the same reason; it’s conducted at some remote distance from the ideas and techniques driving the research.

I remarked on this to David Hays and he replied that, yes, the philosophers will say it can’t be done, but the programs will get better and better. Not, mind you, that Hays thought we were on the verge of cracking the human mind or, for that matter, that I think so now. It’s just that, well, this kind of argumentation isn’t helpful.

I still believe that – not helpful – but I’m beginning to think that, nonetheless, Searle had a point. A lot depends on just what “real understanding” is. The crucial point of his thought experiment is that there was in fact a mind involved: the guy (Searle’s proxy) “manipulating symbols and numerals just as a computer does” has a perfectly good mind (we may assume). But that mind is not directly engaged in the exchange. It’s insulated from understanding Chinese by the layer of (computer-like) instructions he uses to produce the results that fool those who don’t know what’s happening inside the box.
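
To make that insulation concrete, here is a toy sketch in Python. The rulebook and the Chinese strings are my own invention, purely for illustration; the point is only that the program pairs incoming character strings with outgoing ones by shape alone, so whoever (or whatever) executes it needs no Chinese at all.

    # A toy illustration of purely syntactic symbol manipulation, in the spirit
    # of the Chinese Room. The rulebook below is invented for illustration; to
    # the rule-follower its entries are just opaque shapes.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",
        "今天天气好吗？": "今天天气很好。",
    }

    def room(incoming: str) -> str:
        """Return whatever string the rulebook pairs with the incoming string."""
        return RULEBOOK.get(incoming, "对不起，我不明白。")  # the default reply is also just a shape

    print(room("你好吗？"))  # a fluent-looking answer, produced without a trace of understanding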

The core issue is intentionality, an enormously important if somewhat tricky term of philosophical art. David Cole glosses it:
Intentionality is the property of being about something, having content. In the 19th Century, psychologist Franz Brentano re-introduced this term from Medieval philosophy and held that intentionality was the “mark of the mental”. Beliefs and desires are intentional states: they have propositional content (one believes that p, one desires that p, where sentences substitute for “p” ).
He quotes Searle as asserting:
I demonstrated years ago with the so-called Chinese Room Argument that the implementation of the computer program is not by itself sufficient for consciousness or intentionality (Searle 1980). Computation is defined purely formally or syntactically, whereas minds have actual mental or semantic contents, and we cannot get from syntactical to the semantic just by having the syntactical operations and nothing else. To put this point slightly more technically, the notion “same implemented program” defines an equivalence class that is specified independently of any specific physical realization. But such a specification necessarily leaves out the biologically specific powers of the brain to cause cognitive processes. A system, me, for example, would not acquire an understanding of Chinese just by going through the steps of a computer program that simulated the behavior of a Chinese speaker (p.17).
We’ll just skip over Searle’s talk of semantics, as I have come to make a (perhaps idiosyncratic) distinction between semantics and meaning. Let’s put semantics aside, but agree with Searle about meaning.
 
The critical remark is about “the biologically specific powers of the brain.” Brains are living beings; computers are not. Living beings are self-organized “from the inside” – something I explored in an old post, What’s it mean, minds are built from the inside? Computers are not; they are programmed “from the outside” by programmers. But living beings are not self-organized in isolation. They are self-organized in an environment, and it is toward that environment that they have intentional states.

Brains are made of living cells, each active from the time it emerged from mitosis. And so we have growth and learning in development, prenatal and postnatal. At every point those neurons are living beings. And those neurons, like all living cells, are descended from those first living cells billions of years ago.

Searle’s argument ultimately rests on human biology and on a belief that life cannot be “programmed from the outside.” Let us say that I am deeply sympathetic to that view. But I cannot say for sure that a mind cannot be programmed from the outside. Moreover, I note that Searle’s argument originated before the flowering of machine learning techniques over the last decade or so.

There is a sense in which those computers do in fact “learn from the inside”. Programmers do not write rules for recognizing cats, playing Go, or translating from one language to another. The machine is programmed with a general capacity for learning and it learns the “rules” of a given domain itself [3]. As a result, we don’t really know what the computer is doing. We can’t just “open it up” and examine the rules it has developed.
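
To give a concrete feel for what “learning the rules itself” means in the simplest possible case, here is a minimal sketch, assuming nothing about the actual systems mentioned above; it is my own toy example in NumPy. The program is handed only points labeled by a hidden rule (whether y > x) and adjusts its weights by gradient descent; nobody types the rule in, yet it ends up encoded in the learned weights.

    # Minimal sketch: logistic regression trained by gradient descent on a
    # hidden rule. The rule (y > x) is used only to label the data; it is
    # never given to the learner.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 2))       # random 2-D points
    y = (X[:, 1] > X[:, 0]).astype(float)       # hidden rule, used only for labeling

    w, b = np.zeros(2), 0.0
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        w -= 0.5 * (X.T @ (p - y)) / len(y)     # gradient step on cross-entropy loss
        b -= 0.5 * np.mean(p - y)

    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))      # predictions with the final weights
    print("learned weights:", w, "bias:", b)
    print("training accuracy:", np.mean((p > 0.5) == y))

The learned weights are, in effect, the program’s own statement of the rule; in a network with millions of such weights, inspecting them tells us very little, which is the sense in which we can’t just “open it up.”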

Will such technology evolve to the point where these systems have genuine intentionality? We don’t know. They’re a long way from it now, but who knows?

* * * * *

[1] Searle, J., 1980, “Minds, Brains and Programs”, Behavioral and Brain Sciences, 3: 417–57. Preprint available online, http://cogprints.org/7150/1/10.1.1.83.5248.pdf

Searle has a brief 2009 statement of the argument online at Scholarpedia: http://www.scholarpedia.org/article/Chinese_room_argument

[2] Cole, David, "The Chinese Room Argument", The Stanford Encyclopedia of Philosophy (Winter 2015 Edition), Edward N. Zalta (ed.), https://plato.stanford.edu/archives/win2015/entries/chinese-room/

[3] For a good journalistic account of some of the recent work, see Gideon Lewis-Kraus, “The Great A.I. Awakening”, New York Times Magazine, December 14, 2016.

2 comments:

  1. I wonder how strongly a behaviorist in psychology would argue that either classical or operant conditioning is indeed programming a person from the outside?
