Thursday, May 17, 2018

Intelligence requires causal reasoning, which is more than fancy curve fitting

Kevin Hartnett interviews AI pioneer Judea Pearl on the occasion of his new book, The Book of Why (Quanta, May 15, 2018): To Build Truly Intelligent Machines, Teach Them Cause and Effect. Pearl is skeptical that current techniques are the basis of a viable way ahead. Here’s a portion of that interview, followed by a brief comment of my own.

* * * * *

Pearl: I can give you an example. All the machine-learning work that we see today is conducted in diagnostic mode — say, labeling objects as “cat” or “tiger.” They don’t care about intervention; they just want to recognize an object and to predict how it’s going to evolve in time.

I felt an apostate when I developed powerful tools for prediction and diagnosis knowing already that this is merely the tip of human intelligence. If we want machines to reason about interventions (“What if we ban cigarettes?”) and introspection (“What if I had finished high school?”), we must invoke causal models. Associations are not enough — and this is a mathematical fact, not opinion.

Hartnett: People are excited about the possibilities for AI. You’re not?

Pearl: As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting. That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. From the point of view of the mathematical hierarchy, no matter how skillfully you manipulate the data and what you read into the data when you manipulate it, it’s still a curve-fitting exercise, albeit complex and nontrivial.

Hartnett: The way you talk about curve fitting, it sounds like you’re not very impressed with machine learning.

Pearl: No, I’m very impressed, because we did not expect that so many problems could be solved by pure curve fitting. It turns out they can. But I’m asking about the future — what next? ...

Hartnett: What are the prospects for having machines that share our intuition about cause and effect?

Pearl: We have to equip machines with a model of the environment. If a machine does not have a model of reality, you cannot expect the machine to behave intelligently in that reality. The first step, one that will take place in maybe 10 years, is that conceptual models of reality will be programmed by humans.

The next step will be that machines will postulate such models on their own and will verify and refine them based on empirical evidence. That is what happened to science; we started with a geocentric model, with circles and epicycles, and ended up with a heliocentric model with its ellipses.

Robots, too, will communicate with each other and will translate this hypothetical world, this wild world, of metaphorical models.

Hartnett: When you share these ideas with people working in AI today, how do they react?

Pearl: AI is currently split. First, there are those who are intoxicated by the success of machine learning and deep learning and neural nets. They don’t understand what I’m talking about. They want to continue to fit curves. But when you talk to people who have done any work in AI outside statistical learning, they get it immediately. I have read several papers written in the past two months about the limitations of machine learning.

Hartnett: Are you suggesting there’s a trend developing away from machine learning?

Pearl: Not a trend, but a serious soul-searching effort that involves asking: Where are we going? What’s the next step?

* * * * *

Just what does it mean, “to equip machines with a model of the environment”? Are we talking about a 3D model like those used for video games and special effects CGI in movies? Probably not, but what? And isn’t there a sense in which we build a model of the world through our perceptual and cognitive activity? Why don’t we just program that perceptual/cognitive model (and call it common sense)?
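One way to make Pearl’s association-versus-intervention distinction concrete, at least in miniature, is a toy structural causal model. The Python sketch below is my own illustration, not Pearl’s: the variables, the probabilities, and the hidden genetic confounder are all invented. It simulates his cigarette question by comparing the cancer rate among people who happen not to smoke (conditioning, which is all curve fitting over observational data can deliver) with the cancer rate if smoking were switched off for everyone (the intervention do(smoking = 0), which requires knowing the causal structure).

    import random

    random.seed(0)

    def sample(intervene_smoking=None):
        # Toy structural causal model (invented for illustration):
        #   genes -> smoking, genes -> cancer (hidden confounder), smoking -> cancer.
        # Passing intervene_smoking forces the smoking variable, i.e. do(smoking = x),
        # which cuts the arrow from genes into smoking.
        genes = random.random() < 0.3                           # hidden predisposition
        if intervene_smoking is None:
            smoking = random.random() < (0.8 if genes else 0.2)  # observed behavior
        else:
            smoking = intervene_smoking                         # the intervention
        cancer = random.random() < 0.05 + 0.10 * genes + 0.15 * smoking
        return smoking, cancer

    N = 200_000

    # Association: P(cancer | smoking = 0), read off observational data.
    observed = [sample() for _ in range(N)]
    nonsmokers = [c for s, c in observed if not s]
    p_seeing = sum(nonsmokers) / len(nonsmokers)

    # Intervention: P(cancer | do(smoking = 0)), a simulated cigarette ban.
    banned = [sample(intervene_smoking=False) for _ in range(N)]
    p_doing = sum(c for _, c in banned) / N

    print(f"P(cancer | smoking = 0)     ~ {p_seeing:.3f}")
    print(f"P(cancer | do(smoking = 0)) ~ {p_doing:.3f}")

With these invented numbers the observational estimate lands near 0.06 while the interventional one lands near 0.08, because the people who happen not to smoke are disproportionately the ones without the predisposition. That gap is the mathematical fact Pearl is pointing at: no reweighting of the observed curve alone tells you what a ban would do unless you also have (or can infer) the causal graph.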

1 comment:

  1. Judea Pearl is a man??!!

    I had him as Judith Pearl

    no matter, he's in one of those textbooks

    --

    brings me to a whole essay on names

    Leslie?? is that a man or a woman - can be both, as I learned (Leslie Lamport)

    basically -a is for female and -o is for a man / open/closed

    you wouldn't for a century call a man Anna in our culture!!

    but I think there are a lot of male Anas in Seoul

    possibly this -a signifies a looking up to a higher authority etc

    Bobo is a man for sure

    JoJo?? I don't know, sounds playful
