And that requires common sense, among other things.
Right now, for example, if a soldier asks an AI system such as a target identification platform to explain its selection, it can only provide a confidence estimate for its decision, DARPA director Steven Walker told reporters after a speech announcing the new investment. That estimate is typically given in percentage terms: the fractional likelihood that an object the system has singled out is actually what the operator was looking for.
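To make that limitation concrete, here is a minimal, hypothetical sketch of what a confidence-only output looks like; the names (Detection, describe) and the example values are illustrative assumptions, not any real DARPA system.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the system thinks the object is
    confidence: float  # fractional likelihood, e.g. 0.87 -> "87%"

def describe(d: Detection) -> str:
    # All the operator gets back is a label and a percentage -- no reasoning.
    return f"{d.label}: {d.confidence:.0%} confident"

print(describe(Detection(label="armored vehicle", confidence=0.87)))
# -> armored vehicle: 87% confident
```

Explainable AI, as described below, aims to pair that number with an account of how the system arrived at it.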
“What we’re trying to do with explainable AI is have the machine tell the human ‘here’s the answer, and here’s why I think this is the right answer’ and explain to the human being how it got to that answer,” Walker said.

DARPA officials have been opaque about exactly how the newly financed research will result in computers being able to explain key decisions to humans on the battlefield, amid all the clamor and urgency of a conflict, but they said that being able to do so is critical to AI’s future in the military.

Vaulting over that hurdle, by explaining AI reasoning to operators in real time, could be a major challenge. Human decision-making and rationality depend on a lot more than just following rules, which machines are good at. It takes years for humans to build a moral compass and commonsense thinking abilities, characteristics that technologists are still struggling to design into digital machines.

“We probably need some gigantic Manhattan Project to create an AI system that has the competence of a three-year-old,” Ron Brachman, who spent three years managing DARPA’s AI programs before leaving in 2005, said earlier during the DARPA conference. “We’ve had expert systems in the past, we’ve had very robust robotic systems to a degree, we know how to recognize images in giant databases of photographs, but the aggregate, including what people have called commonsense from time to time, it’s still quite elusive in the field.”