Geoffrey Hinton pioneered so-called "deep learning", the technique that allowed a computer to beat Lee Sedol, a South Korean grandmaster, at Go. Adrian Lee interviews Hinton in Maclean's:
Q: So what now? Are there other, even more complicated games that the AI world wants to conquer next?
A: From what we think of as board games and things like that, I don’t think there is—I think this is really the pinnacle. There are of course other games, these fantasy games, where you interact with characters who say things to you. AI still can’t deal with those because it still can’t deal with natural language well enough, but it’s getting much better. And the way translation’s currently done will change, because Google now has what promises to be a much better way to do machine translation. That’s part of understanding natural language properly, and that’ll influence lots of things—it’ll influence fantasy games and things like that, but it will also allow you to search much better, because you’ll have a better sense of what documents mean. It’s already influencing things—in Gmail you have Smart Reply, which figures out from an email what might be a quick reply, and it gives you alternatives when it thinks they’re appropriate. They’ve done a pretty good job. You might expect it to be a big table, of ‘If the email looks like this, this is a good reply, and if the email looks like that, then this might be a good reply.’ It actually synthesizes the reply from the email. The neural net goes through the words in the email, and gets some internal state in its neurons, and then uses that internal state to generate a reply. It’s been trained on a lot of data, where it was told what kinds of replies there are, but it’s actually generating a reply, and it’s much closer to how people do language.
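What Hinton describes ("goes through the words in the email, gets some internal state, then uses that state to generate a reply") is the sequence-to-sequence pattern. Below is a toy sketch of that shape in PyTorch. This is not Google's Smart Reply code; the model, vocabulary size, and hidden size are invented purely to make the encoder-state-decoder idea concrete.

```python
# Toy encoder-decoder: read the email into an internal state, generate a reply from it.
# Random weights and made-up sizes; illustrative only.
import torch
import torch.nn as nn

VOCAB_SIZE = 1000   # hypothetical toy vocabulary
HIDDEN = 128

class TinySeq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.encoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.decoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB_SIZE)

    def forward(self, email_ids, reply_ids):
        # Encoder: run over the words of the email and keep only the final
        # hidden state -- the "internal state in its neurons".
        _, state = self.encoder(self.embed(email_ids))
        # Decoder: generate the reply conditioned on that state
        # (teacher-forced here with a stand-in reply, as during training).
        dec_out, _ = self.decoder(self.embed(reply_ids), state)
        return self.out(dec_out)   # logits over the vocabulary at each reply position

model = TinySeq2Seq()
email = torch.randint(0, VOCAB_SIZE, (1, 12))   # 12 stand-in email tokens
reply = torch.randint(0, VOCAB_SIZE, (1, 4))    # 4 stand-in reply tokens
logits = model(email, reply)
print(logits.shape)   # torch.Size([1, 4, 1000])
```

A real system would be trained on many (email, reply) pairs and would decode and rank candidate replies rather than being fed a stand-in reply; the point here is only the data flow from email words to internal state to generated words.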
Q: Beyond games, then—what might come next for AI?
A: It depends who you talk to. My belief is that we’re not going to get human-level abilities until we have systems that have the same number of parameters in them as the brain. So in the brain, you have connections between the neurons called synapses, and they can change. All your knowledge is stored in those synapses. You have about 1,000 trillion synapses—10 to the 15, it’s a very big number. So that’s quite unlike the neural networks we have right now. They’re far, far smaller; the biggest ones we have right now have about a billion synapses. That’s about a million times smaller than the brain.
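A quick back-of-the-envelope check on those figures (the numbers are Hinton's; the variable names and rounding are mine):

```python
# Order-of-magnitude comparison from the answer above.
brain_synapses = 1e15   # "about 1,000 trillion synapses -- 10 to the 15"
biggest_net = 1e9       # "about a billion synapses" in the largest nets he cites
print(f"{brain_synapses / biggest_net:,.0f}x")   # 1,000,000x -- "about a million times smaller"
```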
Q: Do you dare predict a timeline for that?
A: More than five years. I refuse to say anything beyond five years because I don’t think we can see much beyond five years. And you look at these past predictions, like there’s only a market in the world for five computers [as allegedly said by IBM founder Thomas Watson], and you realize it’s not a good idea to predict too far into the future.
The importance of computing power:
Q: How important is the power of computing to continued work in the deep learning field?
A: In deep learning, the algorithms we use now are versions of the algorithms we were developing in the 1980s and the 1990s. People were very optimistic about them, but it turns out they didn’t work too well. Now we know the reason they didn’t work too well is that we didn’t have powerful enough computers and we didn’t have enough data sets to train them. If we want to approach the level of the human brain, we need much more computation, we need better hardware. We are much closer than we were 20 years ago, but we’re still a long way away. We’ll see something with proper common-sense reasoning.
Q: Can the growth in computing continue, to allow applications of deep learning to keep expanding?
A: For the last 20 years, we’ve had exponential growth, and for the last 20 years, people have said it can’t continue. It just continues. But there are other considerations we haven’t thought of before. If you look at AlphaGo, I’m not sure of the fine details of the amount of power it was using, but I wouldn’t be surprised if it was using hundreds of kilowatts of power to do the computation. Lee Sedol was probably using about 30 watts, that’s about what the brain takes, it’s comparable to a light bulb. So hardware will be crucial to making much bigger neural networks, and it’s my guess we’ll need much bigger neural networks to get high-quality common sense.
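For a rough sense of that gap, using Hinton's ballpark figures (the 200 kW value below is an assumption standing in for "hundreds of kilowatts"):

```python
# Rough power comparison from the answer above; figures are ballpark only.
alphago_watts = 200_000   # assumed stand-in for "hundreds of kilowatts"
brain_watts = 30          # roughly what a human brain draws, per Hinton
print(f"~{alphago_watts / brain_watts:,.0f}x the power of Lee Sedol's brain")   # ~6,667x
```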
Q: In the ’80s, scientists in the AI field dismissed deep learning and neural networks. What changed?
A: Mainly the fact that it worked. At the time, it didn’t solve big practical AI problems, and it didn’t replace the existing technology. But in 2009, in Toronto, we developed a neural network for speech recognition that was slightly better than the existing technology, and that was important, because the existing technology had 30 years of a lot of people making it work very well, and a couple of grad students in my lab developed something better in a few months. It became obvious to the smart people at that point that this technology was going to wipe out the existing one.

H/t Tyler Cowen.