First Thoughts
I don't know quite what I think about that Chinese room. When I read it in Behavioral and Brain Sciences I was puzzled. So what, thought I to myself, so what? At the time I'd been reading widely in computational linguistics, AI, and cognitive science, and was deep into computational semantics myself. Searle's argument didn't address any of the ideas or techniques discussed in that rather broad literature. There wasn't anything in his argument that was of much use to someone either trying to figure out how the mind works or trying to get a computer to do something deep and interesting with language. As far as I know, most arguments about computers and minds are like that.
Now, let us assume for the moment that we have computer systems that deliver high-quality machine translation. By high quality I mean translations as good as the best human translators can produce, suitable for legal and literary purposes. When Shakespeare is back-translated into the original Klingon, native Klingons are pleased with the result. Even in that case Searle's argument would still hold. But it would seem rather thin and insubstantial.
Of course, we don't have such computer systems and I don't see any on the horizon. Surely we can make better systems than we've got. But Searle's argument has little or nothing in it that tells us what to do.
So, yeah, sure, computers can't think. They're not living organisms. I don't remember whether that point was explicit in the argument I read in BBS, but I found it in some of Searle's more recent versions. And I think he's right about that, that it takes a biological organism to think. And...?
Second Thoughts
So, in order to refresh myself I looked up the argument in the Internet Encyclopedia of Philosophy and found this summary of a 1990 version of the argument:
Besides the Chinese room thought experiment, Searle's more recent presentations of the Chinese room argument feature - with minor variations of wording and in the ordering of the premises - a formal "derivation from axioms" (1989, p. 701). The derivation, according to Searle's 1990 formulation, proceeds from the following three axioms (1990, p. 27):

(A1) Programs are formal (syntactic).
(A2) Minds have mental contents (semantics).
(A3) Syntax by itself is neither constitutive of nor sufficient for semantics.

to the conclusion:

(C1) Programs are neither constitutive of nor sufficient for minds.

Searle then adds a fourth axiom (p. 29):

(A4) Brains cause minds.

from which we are supposed to "immediately derive, trivially" the conclusion:

(C2) Any other system capable of causing minds would have to have causal powers (at least) equivalent to those of brains.

whence we are supposed to derive the further conclusions:

(C3) Any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program.

(C4) The way that human brains actually produce mental phenomena cannot be solely by virtue of running a computer program.

On the usual understanding, the Chinese room experiment subserves this derivation by "shoring up axiom 3" (Churchland & Churchland 1990, p. 34).
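Just to make the skeleton of that first derivation visible, here is one way to render A1-A3 ⇒ C1 as a checkable proof. This is my gloss, not Searle's own formalization: the sufficiency relation, and the strong reading of A1 as "programs are purely syntactic," are assumptions I've supplied.

```lean
-- A sketch, not Searle's formalization: the A1–A3 ⇒ C1 skeleton.
-- `Suff P Q` is my gloss for "having property P by itself guarantees Q".
axiom Property : Type
axiom Suff : Property → Property → Prop
axiom Prog : Property   -- running a program
axiom Syn  : Property   -- having (mere) syntax
axiom Sem  : Property   -- having semantics / mental content
axiom Mind : Property   -- having a mind

-- (A1, on a strong reading) Programs are *purely* syntactic: any power
-- a program has by itself, syntax has by itself.
axiom A1 : ∀ Q, Suff Prog Q → Suff Syn Q
-- (A2) Minds have semantics: whatever suffices for a mind thereby
-- suffices for semantics.
axiom A2 : ∀ P, Suff P Mind → Suff P Sem
-- (A3) Syntax by itself is not sufficient for semantics.
axiom A3 : ¬ Suff Syn Sem

-- (C1) Programs by themselves are not sufficient for minds.
theorem C1 : ¬ Suff Prog Mind :=
  fun h => A3 (A1 Sem (A2 Prog h))
```

The exercise shows only that the derivation goes through once A1 gets the strong "purely syntactic" reading; whether the axioms themselves are true, A3 especially, is exactly what the Churchlands and others dispute.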
My statements should be taken as applying to that argument; note, in particular, A2 and C2. Now, I said that computational semantics was well developed by 1980. Did I thereby mean that computers have mental contents (A2)? No. I meant, more or less, that those systems had, say, morphology and syntax, but that they also had something else, something that could be called semantics. Obviously that's not what Searle means by semantics (A2). Computers would have to have the causal powers of brains (C2) in order to have a mind (A4). That's all OK.
But here I am with a computer system where I make a distinction between something I call syntax and something I call semantics. I know we can do better, but Searle's argument is of no help. Nor is it of any help if I'm a psychologist or neuroscientist making observations about, and constructing models of, human behavior and brain activity.
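To make that distinction concrete, here is a toy sketch; it is my illustration, not any particular system from the literature. Syntax assigns the sentence a structure; semantics maps that structure onto a predicate-argument representation. Nothing in either step is mental content in Searle's sense.

```python
# A toy sketch (my illustration, not any specific system) of the
# syntax/semantics split as computational linguists use the terms:
# a syntactic analysis is mapped compositionally onto a separate
# semantic representation, here a crude predicate-logic form.

# Tiny lexicon: word -> (part of speech, semantic constant)
LEXICON = {
    "john":  ("NP", "john"),
    "mary":  ("NP", "mary"),
    "loves": ("V",  "love"),
}

def parse(words):
    """Syntax: recognize the pattern NP V NP and build a parse tree."""
    if len(words) == 3:
        entries = [LEXICON.get(w.lower()) for w in words]
        if all(entries) and [e[0] for e in entries] == ["NP", "V", "NP"]:
            return ("S", ("NP", words[0]),
                         ("VP", ("V", words[1]), ("NP", words[2])))
    raise ValueError("no parse")

def interpret(tree):
    """Semantics: map the parse tree onto predicate-argument structure."""
    _s, (_np, subj), (_vp, (_v, verb), (_np2, obj)) = tree
    pred = LEXICON[verb.lower()][1]
    return f"{pred}({LEXICON[subj.lower()][1]}, {LEXICON[obj.lower()][1]})"

tree = parse(["John", "loves", "Mary"])
print(tree)             # the syntactic structure
print(interpret(tree))  # love(john, mary) -- the "semantic" content
```

The program manipulates the "semantic" representation as readily as the syntactic one, which is roughly what I meant by saying such systems had something that could be called semantics; Searle's A2 is plainly about something else.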
Third Thoughts
Upon further reflection, first and second thoughts are not fully consistent. In my first thoughts I imagined a computer system capable of high-quality machine translation. Was I imagining that it had the causal powers of brains (thereby allowing it to have semantics)? No, not really. I wasn't thinking about brains until the fourth paragraph, and then just barely. Now, if in fact we were able to build computers having the causal powers of brains, well then, Searle's argument would appear to be OK. But whatever we did to build such computers, we didn't get any help from Searle.