Wednesday, December 19, 2018

Kahneman on AI, machine learning and common sense, and sunk costs

COWEN: Do you side with the analysts, such as Martin Ford, who see really a very large number of jobs being potentially automatable with artificial intelligence, machine learning? Or will we always need the human beings to work with the machines?

KAHNEMAN: That we will need human beings is, I think, an illusion. Take chess, for example. Kasparov was beaten 20 years ago, and for a while he maintained, and for a while it was true, that teams of grand masters working with programs would be stronger than either alone. It is true no longer. The programs do not need the grand masters.

You know how it happened, and it’s likely to happen in many other fields. It’s happening in dermatology. Diagnosis is now better done by programs than by people, and the programs are not going to need the person very often. That is, if you give a person the right to intervene, they will sometimes correct mistakes. But they will more often, I think, introduce mistakes. So when you have a well-running program, leave it alone.

COWEN: So we as professors won’t need to grade exams anymore, and I don’t just mean multiple choice. You run machine learning on papers, you find what correlates with a good paper, you put the paper through the program.

KAHNEMAN: Look, the point is, there is so much noise in essay grading that it’s quite easy to imagine a program that would look at various indices and that would do better than hurried and tired professors.

COWEN: If you consider people working in psychology or maybe economics or just social sciences, do you think people persist with their professional and research projects too long or not long enough? Where’s the bias?

KAHNEMAN: My guess is too long, but it’s a personal bias.

COWEN: Because of sunk costs.

KAHNEMAN: Because of sunk costs. I think sunk cost is really the enemy when you’re doing research, innovative research. You’re to recognize that something isn’t working and just move on. And there are different views on that, but my sense is that this is the direction of the bias, yeah, sunk costs. [...] Sunk cost is a fairly specific thing. It is that you’re putting a different value on a move or an investment that you make because of investment that you have already made than you would if you were looking at that de novo.

Sunk costs, by and large, I think, are a negative.
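Kahneman’s definition can be made concrete with a small sketch (the project names and dollar figures below are hypothetical, not from the conversation): a de novo decision rule compares only future costs and future payoffs, so money already spent never changes the choice.

```python
def best_option(options):
    """Pick the option with the highest expected future payoff minus
    remaining cost. Sunk (already-spent) amounts are deliberately ignored:
    this is the 'de novo' view Kahneman describes."""
    return max(options, key=lambda o: o["payoff"] - o["future_cost"])

# Hypothetical choice: the current project has years of work sunk into it,
# but only the numbers still ahead of us should matter.
options = [
    {"name": "continue current project", "future_cost": 30, "payoff": 40},
    {"name": "start new project",        "future_cost": 25, "payoff": 60},
]

print(best_option(options)["name"])  # the de novo view favors the new project
```

The sunk-cost fallacy, in these terms, is adding the already-spent amount back into the comparison for one option only, which can flip a choice that the forward-looking numbers alone would not support.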
Local optima and common sense:
KAHNEMAN: But that AI is developing faster than anybody could have anticipated — no question. And if it continues to develop at that rate, meaning a lot faster than we expect, then things are going to happen relatively quickly.

COWEN: What do you think are the main obstacles? Some people in Silicon Valley will argue AI is stuck at a kind of local optimum. Driverless cars — although they’re ahead of the pace we thought 10 years ago, they may be behind the pace we thought 2 years ago. There’s always a problem with emergency situations, the policeman waving you on. The last 1 percent maybe is very, very difficult.

KAHNEMAN: Yeah. But I can’t evaluate that. That’s a technical problem: how long it will take to clean up the last 1 percent. The question that is of interest to me as a psychologist is, when can you simulate common sense? There is a really serious question that people raise about computers: whether they know what they’re talking about, whether they understand what they’re talking about.

Without sense or whims, and without the perceptual apparatus that we have and the ability to cause things by acting on the world, they can’t be exactly like us. But that sense of understanding . . . nobody actually today would, I think, claim that even the most sophisticated programs have it.

COWEN: Do you think we’ve learned anything general about common sense by having some artificial intelligence?

KAHNEMAN: What we have learned is that our basic ideas about what’s difficult and what’s easy, what’s going to be simple and what’s going . . . have undergone a series of revolutionary changes.

We used to think that perception would be easy, and thinking would be difficult. It turns out that thinking was relatively easy and perception was difficult. Now, there are ways of handling perceptual problems, and so thinking is difficult again. And it’s a very interesting developing thing.
