My basic complaint about such arguments (by, e.g., Dreyfus, Searle, see my most recent post on this topic) is that they don’t engage with the actual techniques used by computer systems nor, for that matter, do they engage with what the various psychologies have to say about the mind/brain. So, if I am an AI researcher, these arguments don’t tell me anything I can use to improve the systems I design. And if I research the human mind/brain, they don’t tell me anything about that either.
The arguments are mostly rhetorical in aim and force, not conceptual. And they get their rhetorical force from the fact that current systems are far from being intelligent, whatever that is, in the way that humans are.
Thus an argument couched in theological terms could potentially have the same rhetorical force, though its conceptual equipment would be quite different. One might argue, for example, that human actions are manifestations of divine design while the actions of computers are those of mere uninspired machines. For all I know, someone somewhere is offering such arguments. But that kind of argument would have to be directed to an audience for whom such arguments have conceptual currency. That's not the case with the audience Dreyfus and Searle are addressing, which includes Dennett and others on the other side of the argument. In either case it is true that no existing computer system provides an inescapable counterexample.
Both arguments depend on the relative incapacity of current systems, and neither engages with actual work on computer systems or on the mind. They are BOTH conceptually empty though rhetorically strong. Thus, for example, neither of these accounts has anything useful to say about the problem of commonsense knowledge, which certainly exists for AI of whatever kind and which hasn't really been addressed in psychology at all, though a great deal of psychology is relevant to the problem.
More later.