In 'old school' symbolic AI, the system reasoned using rules and structures 'hand-coded' by humans, often derived from protocols in which human experts recorded their reasoning about a problem. In the current regime of machine learning, the system is programmed with a general strategy for learning from examples. Just how it learns to classify those examples, its internal strategies, that's NOT programmed, and it's not easy to open the system up and examine those strategies. Hence it's sometimes referred to as 'black box' AI. We know what goes in and we know what comes out, but what happens in between, that's a mystery.
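To make the contrast concrete, here's a quick illustrative sketch of my own (the feature names, thresholds, and toy data are invented, not drawn from any real diagnostic protocol): a hand-coded rule you can read and audit, next to a small learned model whose 'strategy' is just a pile of fitted numbers.

```python
# Hypothetical sketch: the features, thresholds, and toy data are made up.
from sklearn.neural_network import MLPClassifier

# 'Old school' symbolic AI: the rule itself is hand-coded, often transcribed
# from experts explaining their own reasoning. You can read it and audit it.
def expert_rule(temperature_f, white_cell_count):
    return temperature_f > 100.4 and white_cell_count > 11.0

# Machine learning: only the learning procedure is programmed. The internal
# strategy is whatever emerges from fitting the model to labeled examples.
X = [[98.6, 7.0], [99.1, 8.5], [101.2, 12.0], [102.5, 14.0]]  # toy feature vectors
y = [0, 0, 1, 1]                                              # toy labels
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000).fit(X, y)

print(expert_rule(101.0, 12.5))        # True, and the reasoning is legible
print(model.predict([[101.0, 12.5]]))  # a prediction, but the 'why' is buried...
print(model.coefs_[0])                 # ...in arrays of learned weights like these
```

The point of the sketch is the last line: the learned model's 'rules' exist only as weight matrices, which is exactly what makes people call it a black box.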
Do we really want medical diagnosis, and so forth, being done by black box machines? That's the question. Writing in the NYTimes, Vijay Pande, a general partner at Andreessen Horowitz, suggests our fears are exaggerated:
But we make decisions in areas that we don’t fully understand every day — often very successfully — from the predicted economic impacts of policies to weather forecasts to the ways in which we approach much of science in the first place. We either oversimplify things or accept that they’re too complex for us to break down linearly, let alone explain fully. It’s just like the black box of A.I.: Human intelligence can reason and make arguments for a given conclusion, but it can’t explain the complex, underlying basis for how we arrived at a particular conclusion. Think of what happens when a couple get divorced because of one stated cause — say, infidelity — when in reality there’s an entire unseen universe of intertwined causes, forces and events that contributed to that outcome. Why did they choose to split up when another couple in a similar situation didn’t? Even those in the relationship can’t fully explain it. It’s a black box.

The irony is that compared with human intelligence, A.I. is actually the more transparent of intelligences. Unlike the human mind, A.I. can — and should — be interrogated and interpreted. Like the ability to audit and refine models and expose knowledge gaps in deep neural nets and the debugging tools that will inevitably be built and the potential ability to augment human intelligence via brain-computer interfaces, there are many technologies that could help interpret artificial intelligence in a way we can’t interpret the human brain. In the process, we may even learn more about how human intelligence itself works.