I've seen one prior episode of this podcast, Bankless, an episode in which Eliezer Yudkowsky scared the daylights out of the hosts by laying out his standard story of AI doom, something they obviously weren't expecting. These guys know about crypto, but not about AI, so they got blindsided. In this episode they talk with Robin Hanson, who patiently pours cold water on Yudkowsky's version of the future with AI.
Hanson is quite familiar with Yudkowsky's thinking, having been a blogging partner of his a decade ago and having debated him many times. Hanson has his own strange vision of the future – in which we upload our minds to machines and these machines proliferate – and then there's his "grabby" aliens stuff, so there's a lot of mind-boggling going on. He gives the substance of a recent blog post, "Most AI Fear Is Future Fear," in which he makes the point that human culture is ever changing and will continue to change in the future.
I don't necessarily endorse any particular point in this discussion, though I do think AI doom is more mythical than real. Rather, I present this podcast as an example of "future shock," to borrow a phrase from Alvin Toffler. The process of assimilating AI is going to force us to rethink a lot of STUFF from top to bottom. This will be a tricky process and will take generations to accomplish.
We're Not Going to Die: Why Eliezer Yudkowsky is Wrong with Robin Hanson
From the YouTube show notes:
In this highly anticipated sequel to our first AI conversation with Eliezer Yudkowsky, we bring you a thought-provoking discussion with Robin Hanson, a professor of economics at George Mason University and a research associate at the Future of Humanity Institute at Oxford University.
In this episode, we explore:
- Why Robin believes Eliezer is wrong and that we're not all going to die from an AI takeover – though we might become their pets instead.
- The possibility of a civil war between multiple AIs and why it's more likely than being dominated by a single superintelligent AI.
- Robin's concerns about the regulation of AI and why he believes it's a greater threat than AI itself.
- A fascinating analogy: why Robin thinks alien civilizations might spread like cancer.
- Finally, we dive into the world of crypto and explore Robin's views on this rapidly evolving technology.
Whether you're an AI enthusiast, a crypto advocate, or just someone intrigued by the big-picture questions about humanity and its prospects, this episode is one you won't want to miss.
Topics Covered
0:00 Intro
8:42 How Robin is Weird
10:00 Are We All Going to Die?
13:50 Eliezer’s Assumption
25:00 Intelligence, Humans, & Evolution
27:31 Eliezer Counter Point
32:00 Acceleration of Change
33:18 Comparing & Contrasting Eliezer’s Argument
35:45 A New Life Form
44:24 AI Improving Itself
47:04 Self Interested Acting Agent
49:56 Human Displacement?
55:56 Many AIs
1:00:18 Humans vs. Robots
1:04:14 Pause or Continue AI Innovation?
1:10:52 Quiet Civilization
1:14:28 Grabby Aliens
1:19:55 Are Humans Grabby?
1:27:29 Grabby Aliens Explained
1:36:16 Cancer
1:40:00 Robin’s Thoughts on Crypto
1:42:20 Closing & Disclaimers