From a long post by David Chapman, How should we evaluate progress in AI?, at Meaningness:
Because AI investigates artificial intelligence, its central questions are not necessarily scientifically interesting. They are interesting for biology only to the extent that AI systems deliberately model natural intelligence; or to the extent that you can argue that there is only one sort of computation that could perform a task, so biology and artificial intelligence necessarily coincide. This may be true of the early stages of visual processing, for example.

AI is mostly not about what nature does compute (science), nor about what we can compute today (engineering), nor about what could in principle be computed with unlimited resources (mathematics). It is about what might be computed by machines we might realistically build in the not-too-distant future. As this essay goes along, I will suggest that AI’s criterion of interestingness is therefore closer to that of philosophy of mind than to those of science, engineering, or mathematics.
I like that, the way it situates AI betwixt and between. That's why this post exists; the rest is gravy.
Chapman goes on to assert:
The problem—in both psychology and AI—is not bad scientists. It is that the communities have had bad epistemic norms: ones that do not reliably lead to new truths. Individual researchers do what they see other, successful researchers doing. We can’t expect them to do otherwise—not without a social reform movement.
OK. Though... I don't know about psychology, but maybe AI needs to take its project up a whole cultural rank, to Rank 5 – which is, as far as I can tell, mostly a gleam in various folks' eyes.
Later on:
On the other hand, analytic philosophy of mind’s criterion for what counts as “interesting” largely coincides with, and formed, that of AI. From its founding, AI has been “applied philosophy” or “experimental philosophy” or “philosophy made material.” The hope is that philosophical intuitions could be demonstrated technically, instead of just argued for, which would be far more convincing. I share that hope.

Two fundamental intuitions most analytic philosophers of mind want to prove are:
- Materialism (versus mind/body dualism): mental stuff is really just physical stuff in your brain.
- Cognitivism (versus behaviorism): you have beliefs, consider hypotheticals, make plans, and reason from premises to conclusions.
These are apparently contradictory. “Hypotheticals” do not appear to be physical things. It is difficult to see how the belief “Gandalf was a wizard” could both be in your head and about Gandalf, as a physical fact. And so on.

This tension generated the problem space for GOFAI. The intuition of all cognitive scientists (including me! until 1986) was that this conflict must be resolvable; and that its resolution could be proven, beyond all possibility of doubt, via technical implementation. [...]

How did we go so wrong for so long with GOFAI? I think it was by inheriting a pattern of thinking from analytic philosophy: trying to prove metaphysical intuitions with narrative arguments. We knew we were right, and just wanted to prove it. And the way we went about proving it was more by argument than experiment.

Eventually, obstacles to the GOFAI agenda appeared to be matters of principle, not just matters of limited technical or scientific know-how, and it collapsed.
And so forth. Chapman goes on to suggest that AI is more like architectural design than engineering. Engineering starts with a well-defined problem. Architectural design not so much:
Design, like engineering, aims to produce useful artifacts. Unlike engineering, design addresses nebulous (poorly characterized) problems; is not confined to explicit, rational methods; and develops snazzy—not optimal—solutions. [...]

Design concentrates on synthesis, more than analysis. Since the problem statement is nebulous, it doesn’t provide helpful guiding implications; but neither does it strongly constrain final solutions. Design, from early in the process, constructs trial solutions from plausible pieces suggested by the concrete problem situation. Analysis is less important, and comes mostly late in the process, to evaluate how good your solution is.

Since design problems are nebulous, there is no such thing as an optimal solution. The evaluation criterion might be called “snazziness” instead. A good design is one people like. It should make you go “whoa, cool!” An outstanding design amazes. Design success means not that you solved a specific problem as given, but that you produced something both nifty and useful in a general vicinity. (The products of design, unlike those of art, have to work as well as wow.)
A bit later, in the context of empirical studies of design practice:
First, a designer maintains contact with the messy concrete specifics of the problem throughout the process. An engineer, by contrast, operates primarily in a formal domain, abstracted from the mess.

This I like, "maintains contact..."
And so forth and so on:
Analogously, I believe there is significantly less to current spectacular demos of “deep learning” than meets the eye. This is not mainly general cynicism about spectacles, nor skepticism about AI demos in general, nor dislike of deep learning in particular. (Although the deep learning field’s relative lack of interest in explanation does make it easier for researchers to fool themselves.) Primarily, it’s based on my guesses about specifically how these systems accomplish the tasks they are shown performing in the demos; and from that, how likely they are to accomplish tasks that may appear similar but aren’t.