
Wednesday, June 1, 2022

Interesting interview with Geoffrey Hinton [+follow-up Q&A]

The Robot Brains Podcast

Season 2, Ep. 22: Geoff Hinton on revolutionizing artificial intelligence... again

Over the past ten years, AI has experienced breakthrough after breakthrough in everything from computer vision to speech recognition to protein folding prediction, and so much more.

Many of these advancements hinge on the deep learning work conducted by our guest, Geoff Hinton, who has fundamentally changed the focus and direction of the field. He is a recipient of the Turing Award, often described as the Nobel Prize of computer science, and his work has been cited more than half a million times.

Hinton has spent about half a century on deep learning, most of that time researching in relative obscurity. That all changed in 2012, when Hinton and his students showed that deep learning beats every other approach to computer vision at image recognition, and by a very large margin. That result, now known as the ImageNet moment, changed the whole field of AI: pretty much everyone dropped what they had been doing and switched to deep learning.

Geoff joins Pieter in our two-part season finale for a wide-ranging discussion inspired by insights gleaned from Hinton's journey from academia to Google Brain. The episode covers how existing neural networks and backpropagation differ from how the brain actually works; the purpose of sleep; and why it's better to grow our computers than to manufacture them.

What's in this episode:

00:00:00 - Introduction
00:02:48 - Understanding how the brain works
00:06:59 - Why we need unsupervised local objective functions
00:09:39 - Masked auto-encoders
00:10:55 - Current methods in end-to-end learning
00:18:36 - Spiking neural networks
00:23:00 - Leveraging spike times
00:29:55 - The story behind AlexNet
00:36:15 - Transition from pure academia to Google
00:40:23 - The secret auction of Hinton’s company at NIPS
00:44:18 - Hinton’s start in psychology and carpentry
00:54:34 - Why computers should be grown rather than manufactured
01:06:57 - The function of sleep and Boltzmann Machines
01:11:49 - Need for negative data
01:19:35 - Visualizing data using t-SNE
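
The episode closes with t-SNE (01:19:35), the visualization technique Hinton developed with Laurens van der Maaten. For the curious, here is a minimal sketch of t-SNE in action; this is my own illustration, not from the episode, and it assumes scikit-learn and matplotlib are installed:

import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

# Load the 8x8 handwritten-digit images as 64-dimensional vectors.
X, y = load_digits(return_X_y=True)

# Embed into 2-D. Perplexity is roughly the effective number of neighbors
# each point considers; 30 is a common default.
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

# Plot the embedding, colored by digit class.
plt.scatter(emb[:, 0], emb[:, 1], c=y, cmap="tab10", s=8)
plt.title("t-SNE embedding of the digits dataset")
plt.show()

Run on the digits dataset, this typically produces ten well-separated clusters, one per digit class, which is exactly the kind of structure t-SNE is designed to reveal in high-dimensional data.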

Links:
Geoff's Bio: https://en.wikipedia.org/wiki/Geoffrey_Hinton
Geoff's Twitter: https://twitter.com/geoffreyhinton
Research and Publications: https://bit.ly/3z3M54e
Google Scholar Citations: https://bit.ly/3N892HJ
Story Behind the 2012 NIPS Auction: https://bit.ly/3t9xsIN
GLOM: https://bit.ly/3lYgWr6
Vector Institute: https://vectorinstitute.ai/

Follow-up Q&A to the video above:

Jun 8, 2022. Last week, we were honored to have Professor Geoff Hinton join the show for a wide-ranging discussion inspired by insights gleaned from Geoff's journey in academia, as well as his past 10 years with Google Brain. The episode covers how existing neural networks and backpropagation differ from how the brain actually works; the ImageNet/AlexNet breakthrough moment; the purpose of sleep; and why it's better to grow our computers than to manufacture them.

As you might recall, we also gave our audience an opportunity to contribute questions for Geoff via Twitter. We received so many amazing questions that we had to split our time with Geoff into two parts! In this episode, we discuss some of those questions with him.

Tune in to get Geoff’s answers to the following questions AND MORE:

Are you concerned with AI becoming too successful?
What is the connection between mania and genius?
What childhood experiences shaped you the most?
What is next in AI?
What should PhD students focus on?
How conscious do you think today's neural nets are?
How important is embodiment for intelligence?
How does the brain work?
