Wednesday, November 22, 2023

Further thoughts on the Chinese Room

In my tantalizing paper I all but declared Searle’s Chinese Room argument to be obsolete on the grounds that it no longer does any useful intellectual work. Let’s take another crack at that.

Back when Searle wrote that paper, 1980 or thereabouts, we really didn’t need an explanation for why computers couldn’t think. They’re not human; that seemed adequate. Yet somehow that didn’t seem quite satisfactory. Couldn’t a device with enough rules of the proper kind reason just like us? That was the challenge posed by cognitive science in general, AI in particular.

So, Searle concocted this cockamamie thought experiment to show why that ‘classical’ AI approach wouldn’t work. His idea was to isolate the intentions of the guy in the room from the relationship between the inputs to the room and the outputs generated by it. That relationship was governed entirely by the rules. And the guy in the room didn’t make the rules. He only applied them. Thus his intention was only to apply the rules, and it was isolated from the content of the rules.
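To make the structure concrete, here’s a toy sketch in Python. It’s my own illustration, not anything from Searle’s paper, and the symbols and rules in it are invented. The room reduces to a rule book that maps input symbol strings to output symbol strings; the operator’s only job is to look things up.

# A toy "Chinese Room": the operator is nothing but a pattern-matcher over a rule book.
# The rule book is invented purely for illustration; it maps input symbol strings
# to output symbol strings. Nothing in the matching process touches meaning.

RULE_BOOK = {
    "你好吗": "我很好",          # the operator never learns these mean "How are you?" / "I'm fine"
    "你是谁": "我是一个房间",    # ... or "Who are you?" / "I am a room"
}

def operator(input_symbols: str) -> str:
    """Apply the rules exactly as written; emit a fixed fallback symbol otherwise."""
    return RULE_BOOK.get(input_symbols, "不明白")   # fallback: "(I) don't understand"

# From outside the room, the behavior looks conversational:
print(operator("你好吗"))   # -> 我很好
print(operator("你是谁"))   # -> 我是一个房间

The operator could run this procedure flawlessly without ever knowing that the symbols are Chinese, or that they mean anything at all.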

The upshot of this thought experiment is a more refined answer to our question: Why can’t computers think? Why? Because they lack intentionality. What’s intentionality? Well, it’s something that biological organisms have. Oh. Oh? And just what does that tell us? “Not one whole fork of a lot,” I thought back then. But it was something. If only we knew what intentionality was, something about the relationship of the organism to the environment in which it lives, no?

Still, it was something.

Now we’ve got large language models. And they do a much more convincing imitation of humans thinking and reasoning. But is it mere imitation? Or perhaps imitation isn’t the right word, as it implies intention, and they don’t have intention, do they? Let’s call it appearance: they exhibit a much more convincing appearance of human-like thinking and reasoning.

Now, if we knew just what it is that humans do when we think, well then saying “they can’t think” would have explanatory value. As we don’t know how humans think, the explanatory value of that statement approaches zero. And if we knew just what it is that these LLMs are doing, why, then we wouldn’t worry about whether or not they’re thinking, would we?

Saying that they’re NOT thinking tells us (next to) nothing about what they ARE doing and tells us (next to) nothing about ourselves.

These labels – thinking, not thinking – are little more than proxies for human and computer, AND we know that on extrinsic grounds. We don’t have to examine the behavior to make that determination. We can just look at the thing that’s producing the behavior and we know what kind of thing it is. But we’re NOT making that judgment on the basis of the behavior itself.

And that was the whole point of the so-called Turing Test, no? Behavior will tell. Well, it doesn’t, not in this case. 

Addendum: The Chinese Room is, in effect, a reply to the Turing Test. The Turing Test says, “behavior will tell.” The Chinese Room says, “no, it won’t.” Well, back in the days when behavior could tell, the Chinese Room, paradoxically, was a compelling argument. Now that behavior can’t tell, it’s next to worthless.
