Tuesday, October 15, 2019

AI can't do common sense [I intersperse links to my own work where it's relevant]

Sean Carroll interviews Melanie Mitchell:
0:08:34 SC [Sean Carroll]: Well, I’ve noticed this, and there are people who are not professional AI researchers, but are famous public intellectuals, Stephen Hawking, Elon Musk, Bill Gates, people like that, who warn against AI coming in the future and being super intelligent and taking over. And they get a lot of pushback from people who spend their lives doing AI. But I don’t find the pushback especially convincing, because you can be so immersed in the day-to-day difficulties that you miss the possibility of something big going on. So, that’s what you’re trying to sort of step back and ask about in the book.

0:09:06 MM [Melanie Mitchell]: Exactly. Yeah. And what’s interesting about AI, which might be different from your field, I don’t know, but everybody has an opinion about it. Everybody has an “informed opinion” about it. Everybody thinks they know what intelligence is, and how hard it is for AI to get it. And we get a lot of people who are not in the field who have never worked in this area opining [chuckle] with great confidence about the future, which is very strange.

0:09:34 MM: And I think most, as you say, most people who are kind of on the ground working day-to-day don’t agree with a lot of these more over-the-top predictions. But if you take someone like Ray Kurzweil, okay, who is the… Sort of promotes the idea of the singularity. Where he thinks that AI is going to become at the level of humans within the next 10 years. And then a billion times more intelligent than humans within 20 years or 30 years, or something. He would say, “Well, if you’re sitting on an exponential curve,” and that’s his idea that progress in AI is on an exponential curve, “if you’re sitting in the middle of an exponential curve, it doesn’t look exponential to you. But it’s about to get very, very steep.” And…

0:10:28 SC: The nice thing about sitting on curves that don’t look exponential is some of them might not be exponential.
Yeah, some of those exponential curves plateau out.
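A quick illustration of why that matters, with made-up numbers: an exponential curve and a logistic “S-curve” are nearly indistinguishable while you’re sitting in the middle of them, and only later does the logistic one plateau. The growth rate, starting value, and ceiling below are arbitrary, chosen only to show the two shapes.

```python
import math

# Made-up parameters, chosen only to illustrate the shapes.
K = 1000.0   # ceiling (plateau) of the logistic curve
r = 0.5      # growth rate shared by both curves
x0 = 1.0     # common starting value

for t in range(0, 25, 4):
    exponential = x0 * math.exp(r * t)
    logistic = K / (1 + (K / x0 - 1) * math.exp(-r * t))
    print(f"t={t:2d}  exponential={exponential:10.1f}  logistic={logistic:7.1f}")
# Early on the two columns track each other closely; by the end the exponential
# has run off past 160,000 while the logistic flattens out near 1,000.
```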

I didn't exactly work in AI, but I did doctoral work in a neighboring area, computational linguistics. I worked on Coleridge's "Kubla Khan" – tried to develop a semantic model for it and failed – and Shakespeare's Sonnet 129, where I did come up with at least part of a model. And as far as I know, I'm the only one who's worked on real poems from a near-AI perspective. And I'm not worried that AI's going to blow up into super-intelligence any time soon. Also, I know a bit about the brain, aka real neural networks, and consulted with the late Walter Freeman while working on my book about music, Beethoven's Anvil. I know better.

A bit later:
0:21:56 MM: That’s right. There’s a lot of differences between neural networks and the brain. Most of the most successful neural networks people use today are very loosely based on the way the visual system works, at least as of 1950, the understanding… [laughter] And they’re called convolutional neural networks. So, I don’t know, some people in your audience probably have heard of these.

0:22:21 SC: So what does that mean? What does convolutional mean in this context?

0:22:24 MM: So the idea here is that, if you look at the visual system, each neuron in, say, the lowest layers of the visual system… The visual system is organized into layers. In the lowest layer, each neuron very roughly, this is a very rough approximation, has input from, say, the retina, and it’s looking out at the visual scene, and it’s sensitive to a particular part of the visual scene. A small part.

0:22:54 SC: Yeah.

0:22:55 MM: And what it does is very roughly equivalent to a mathematical operation called a convolution, where it essentially multiplies the weights on its inputs by the input values in this small area and sums them up.
Ah yes, convolution. Karl Pribram looked into holograms as models for the operation of visual perception, and I followed his work carefully. David Hays and I made use of it (and hence of the idea of convolution) in proposing an account of metaphor. It was an informal proposal; we didn't do the math. I later made use of the idea, again informally, in proposing – with hand-waving and tap-dancing – a solution to my old "Kubla Khan" problem. So, that's 45 years from the time I posed the problem of 'calculating' meaning in "Kubla Khan" back in 1972 until I proposed an approach to a solution in 2017. Not bad. From, say, sometime in the 80s up through 2016 I didn't think I'd ever see so much as a proposal for a solution. So what arrived in 2017 was a gift.
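To make Mitchell's description concrete, here is a minimal sketch of that operation: a small patch of weights slides over an image, and at each position the weights are multiplied by the pixel values underneath and summed. The image and kernel below are made-up toy values, and the sketch ignores everything a real convolutional layer adds (strides, padding, biases, multiple channels, nonlinearities).

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small weight patch over the image; at each position,
    multiply the weights by the underlying pixel values and sum them up."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + kh, x:x + kw]   # the small part of the "visual scene" this unit sees
            out[y, x] = np.sum(patch * kernel)  # weights times inputs, summed
    return out

# Toy example: a vertical-edge kernel applied to a made-up 5x5 "image"
# that is dark on the left and bright on the right.
image = np.array([[0, 0, 1, 1, 1]] * 5, dtype=float)
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)
print(convolve2d(image, kernel))  # responds strongly where the vertical edge is
```

Each output value is just that multiply-and-sum over one small neighborhood, which is why such a unit is "sensitive to a particular part of the visual scene."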

But I still don't think super-intelligence is just around the corner. Let's move on with the interview. Somewhat later, talking about using neural nets for perceptual processing:
0:26:30 MM: ... It’s not obvious when you’re looking at these networks. They perform really well. Think of a face recognition neural network. You train it on millions of faces and now it’s able to recognize faces, certain people. We humans look at it and say, “Oh, it’s good at recognizing faces.” But you don’t actually know for sure that that’s what it’s recognizing. It may be that there’s some aspect of the data, some, as you say, texture or something in the data that it’s very sensitive to, that happens to be statistically associated with certain faces. And so, it’s not necessarily learning to recognize the way we recognize, the way humans recognize. It may be something completely different that may not generalize well.

0:27:32 SC: Yeah, when the context changes, that correlation might completely go away.

0:27:36 MM: And that’s something that people have found with these neural networks, is that, not only are we able to fool them, but even if we’re not trying to fool them, certain small changes in the data they’re given, data that’s slightly different in certain ways from the training input, will cause them to fail. In one recent experiment, they trained a neural network to recognize fire engines, fire trucks, right? Then they photoshopped images of fire trucks in weird positions in the image. Upside down, sideways, in the sky. And the network…

0:28:14 SC: Had no idea.

0:28:15 MM: Completely misclassified it, even though a human would be able…

0:28:18 SC: Yeah.

0:28:19 MM: Would recognize them. So then, when we say we’ve trained them to recognize fire trucks, it’s not totally clear what we’ve actually trained them to recognize. That’s a little bit of a difficulty in neural nets.

0:28:30 SC: Well, and this is one of the reasons why some of the most impressive successes of AI programs have been in very well-defined, finite situations like games, right? Like chess and go, and so forth.

0:28:40 MM: Yes.
Self-driving cars?
0:31:55 MM: The real world is different from simulation. It’s different from all experimental techniques. It’s… Self-driving cars turned out to be a lot harder than people thought, just like a lot of things in AI. And the reason seems to be that there are so many different possible things that can happen. And I think this is true in most of life, not just driving. But most of the time, you’re driving along in your… Say, you’re on the highway and there’s cars in front of you, there’s cars in back of you, and nothing much is happening. But occasionally, something unexpected happens, like a fire engine turns on its siren and starts coming by. Or there is a tumbleweed in the road. I spent a lot of time in New Mexico.

0:32:55 SC: Yeah. [laughter] And even though these events are unlikely, they’re crucially important that we get them right, right?

0:33:00 MM: Yes. And one of the problems with self-driving cars, I’ve been told, nowadays, is that they perceive obstacles all the time, even when there’s no obstacle or a human wouldn’t consider the thing an obstacle. And so, they put on the brakes quite a bit.
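The braking problem is partly just base rates. Here's a back-of-the-envelope sketch with invented numbers (none of them come from the interview): even a perception system that false-alarms on a tiny fraction of frames will flag phantom obstacles many times an hour, because it is making that call dozens of times a second.

```python
# Invented numbers, for illustration only.
frames_per_second = 30        # how often the perception stack classifies the scene
false_alarm_rate = 0.0005     # fraction of obstacle-free frames flagged as obstacles
seconds_per_hour = 3600

false_alarms_per_hour = frames_per_second * seconds_per_hour * false_alarm_rate
print(f"~{false_alarms_per_hour:.0f} phantom obstacles per hour")  # ~54 with these numbers
```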
And then we have Google's DeepMind group doing some really super work on a video game called Breakout.
0:38:35 MM: Breakout. Yeah, it’s a fun Atari game. So they taught… They used reinforcement learning just like in AlphaGo to teach the machine how to play Breakout. It only learned from pixels on the screen. It didn’t have any notion built into it.

0:38:50 SC: So it doesn’t think, “Oh, I have a paddle. There’s a ball, there’s a wall.” It just…

0:38:53 MM: No. But it learned to play at a superhuman level. But then another group did an experiment where they took the paddle and they moved it up two pixels. Now the program could not play the game at all because it hadn’t abstracted the notion of a paddle as an object. It’s just a bunch of pixels. It was as if we would see the world and not see objects.

0:39:17 SC: But what’s the lesson there? Is there a way that we can… Is there a more efficient way of teaching the computer how to think about the world to give it some common sense?

0:39:26 MM: We may have to build some things in. It may be that things are built into our brain by evolution.
Whoops!
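Here's what "it only learned from pixels" means in practice: the input to the network is just an array of numbers, so shifting the paddle up two pixels hands the trained network a pattern it has never seen, and nothing in it says "same paddle, slightly higher." A toy sketch follows; the 10×10 "screen" and the paddle location are made up, and DeepMind's actual agents saw 84×84 downsampled Atari frames, but the point is the same.

```python
import numpy as np

# A made-up 10x10 "screen": 0 = background, 1 = paddle pixels near the bottom.
screen = np.zeros((10, 10))
screen[8, 3:7] = 1.0                              # the paddle, where the network was trained to see it

original = screen.flatten()                       # to the network, the game is just this vector of numbers
shifted = np.roll(screen, -2, axis=0).flatten()   # same paddle, moved up two pixels

changed = np.count_nonzero(original != shifted)
print(f"{changed} of {original.size} input values changed")
# A human sees the same scene; the network sees a different point in input space,
# because it never abstracted "paddle" as an object.
```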

And so on through deep fakes, chatbots, language understanding, the Turing test, racial bias in facial recognition algorithms, just what is superintelligence anyhow?, is AI intrinsically limited, abstract concepts, analogies, meta-cognition and so forth. Winding up:
1:12:43 SC: Alright, just to close up then, let’s let our hair down, it’s the end of the podcast, and prognosticate about the future a little bit. I know that it’s always very hard, and I promise not to hold any bad prediction you make against you 50 years from now, but… Just as one data point, what do you think the landscape will look like 50 years from now in terms of AI, in terms of how general-purpose it will be, how much common sense it will have, how close it will come to being humanly intelligent?

1:13:12 MM: 50 years from now, wow.

1:13:14 SC: You can change it to another time, but I think 50 years is good because on the one hand, we’ll probably be dead.

1:13:19 MM: Yeah, will the world last that long?

1:13:21 SC: On the other hand, like you can… Maybe you can accurately guess 10 years from now, but no one can guess accurately 50 years from now, so.

1:13:27 MM: Yeah. Yeah. Well, I can imagine that we would have much better chatbots that can do… The deep fake stuff would be incredibly good, which is terrifying.
And things sputter on from there for six more minutes. No predictions. [Smart]

H/t 3QD.
