Sunday, January 1, 2023

Sabine Hossenfelder on the possibility of human extinction

Correction to what I say at 11 mins 50 seconds: A supervolcano eruption ejects more than 1000 cubic kilometers of matter (not 1000 cubic meters). Sorry about that!
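To put the correction in perspective, the two figures differ by a factor of a billion:

$$1~\mathrm{km}^3 = (10^3~\mathrm{m})^3 = 10^9~\mathrm{m}^3, \qquad \text{so} \qquad 1000~\mathrm{km}^3 = 10^{12}~\mathrm{m}^3.$$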

What do we know about the risks of humans going extinct? In today's video I collect what we know about the frequency of natural disasters and just how they would kill us, and estimates for man-made disasters.

00:00 Intro
00:30 What Is an Existential Risk?
02:00 Would Extinction be Bad?
04:18 Man-made Disasters
10:36 What's The Risk of Man-made Disasters?
11:35 Natural Disasters
13:38 What's the Risk of Natural Disasters?
16:55 Why Can't the LHC Produce Black Holes?
19:29 Protect Your Data with NordVPN

Many thanks to Jordi Busqué for helping with this video http://jordibusque.com/

What about rogue AIs?

9:01: The biggest problem with Artificial Intelligence that longtermists see is that an AI could become intelligent enough to survive independently of us but pursue interests that conflict with our own. They call it the “misalignment problem”. In the worst case, the AIs might decide to get rid of us. And could we really blame them? I mean, most of us can’t draw a flower, let alone a human face, so what’s the point of our existence really?

This wouldn’t necessarily be an extinction event in the sense that intelligent life would still exist, it just wouldn’t be us. Under which circumstances you might consider an AI species a continuation of our own line is rather unclear. Longtermists argue it depends on whether the AI continues our “values”, but it seems odd to me to define a species by its values, and I’m not sure our values are all that great to begin with.

In any case, I consider this scenario unlikely because it assumes that advanced AIs will soon be easy to build and reproduce, which is far from reality. If you look at what’s currently happening, supercomputers are getting bigger and bigger, and the bigger they get, the more difficult they are to maintain and the longer it takes to train them. If you extrapolate the current trend to the next few hundred years, we will at best have a few intelligent machines owned by companies or governments, and each will require a big crew to keep it alive. They won’t take over the world any time soon.

The biggest existential risk?

19:23: Okay, so in summary, the biggest existential risk is our own stupidity.

2 comments:

  1. In response to the biggest existential risk: some things never change. . .