The folks at Language Log have been having an interesting discussion about machine translation, "They called for more structure", started by a passage from Donald Barthelme. Down in that discussion Jason Eisner makes a useful remark:
The field of AI includes both neat and scruffy approaches. A neat system for MT would be a faithful implementation of some linguistic theory. Current leading MT systems are somewhat scruffy. They contain various hacks and shortcuts that help to produce a decent translation quickly.

Researchers with a scruffy-AI mindset may think that's just fine. Either they suspect that brains themselves are much scruffier than linguists admit, or they have no opinion about brains and simply want to engineer a working product.

A scruffy-AI researcher may want to enrich the current system to make more use of syntax, but will be perfectly happy to use a "big hairy four-by-four" approximation of syntax that is nailed onto the rest of the system with railroad spikes. The goal is to improve the end results by any expedient method.

Other researchers working on the same system may be true believers in neat AI. They really wish that the system had been designed on clean linguistic and statistical principles from the ground up. Unfortunately such systems would be hard to build and have not worked as well in the past, so these neat-AI researchers settle for helping to nail syntax onto an existing scruffy system. They feel proud of themselves for using (more) linguistics. But does this route really lead toward the utopian system they dream of? Can the hybrid system be gradually made more principled, as the old hacks are gradually phased out? Or is that just a comforting fantasy that sustains them, as it sustains Barthelme's construction workers? "The exercise of our skills, and the promise of the city, were enough."