Honestly, I don’t know. Something’s been going on in how I think about this kind of question, but I’m not quite sure what it is. It seems mostly intuitive and inarticulate. This is an attempt at articulation.
I’m sure that I’m at least irritated at the idea that thinking computers will inevitably emerge. The level of irritation goes up when people start predicting when this will happen, especially when the prediction is, say, 30 to 50 years out. Late last year the folks at Open Philanthropy conducted an elaborate exercise in predicting when “human-level AI” would emerge. The original report is by Ajeya Cotra and is 1969 pages long; I’ve not read it. But I’ve read summaries: Scott Alexander at Astral Codex Ten and Open Philanthropy’s Holden Karnofsky at Cold Takes. Yikes! It’s a complicated intellectual object with lots of order-of-magnitude estimates and charts. But as an effort at predicting the final – big fanfare here – emergence of artificial general intelligence (AGI), it strikes me as being somewhere between overkill and pointless. No doubt the effort itself reinforces their belief in the coming of AGI, which remains a rather vague notion.
And yet I find myself reluctant to say that some future computational system will never “think like a human being.” I remember when I first read John Searle’s famous Chinese Room argument about the impossibility of artificial intelligence. At the time, 1980 or so, I was still somewhat immersed in the computational semantics work I’d done with David Hays and was conversant with a wide range of AI literature. Searle’s argument left that work untouched. Such a wonder would lack intentionality, Searle argued, and without intentionality there can be no meaning, no real thought. At the time “intentionality” struck me as a word standing in for a lot of things we didn’t understand. And yet that didn’t mean I thought that, sure, someday computers would think. I just didn’t find Searle’s argument convincing. Not then, and not now, not really.
Of course, Searle isn’t the only one to argue against the machine. Hubert Dreyfus did so some fifteen years earlier, and on much the same grounds, and others have done so since. I just don’t find the argumentation very interesting.
It seems to me that those arguing strongly against AI implicitly depend on the fact that we don’t have such systems yet. They also have a strong sense of the difference between artificial inanimate systems, like computers, and living beings, like us. Those arguing in favor depend on the fact that we cannot know the future (no matter how hard they try to predict it), and the future is when these things will supposedly happen. They also believe that while, yes, animate and inanimate systems are different, both are physical systems, and it’s the physicality that counts.
None of that strikes me as a substantial basis for strong claims for or against the possibility of machines thinking at the level of humans.
Meanwhile, the actual history of AI has been full of failed predictions and unexpected developments.
We just don’t know what’s going on.
Addendum, 3.29.22: Is the disagreement between two views constructed within the same basic conceptual framework (“paradigm”), or is the disagreement at the level of the underlying framework?
See also, A general comment concerning arguments about computers and brains, February 16, 2022.