Monday, January 30, 2023

Peter Thiel’s second thoughts about funding Eliezer Yudkowsky and friends

That’s my speculative and somewhat polemical framing of a middle passage in this video, which presents a talk Peter Thiel recently gave before the Oxford Union. He’s talking about technology stagnation and so on and so forth. About Thiel, from the YouTube description:

Peter Thiel is an American technology entrepreneur and investor. He co-founded PayPal and Palantir, made the first outside investment in Facebook, and has funded companies like LinkedIn and Yelp. Thiel also started the Thiel Foundation, which works to advance technological progress and long-term thinking via funding non-profit research into artificial intelligence, life extension, and seasteading.

At about twenty minutes in (c. 20:07) there’s a striking passage where Thiel talks about a change in attitude that took place a decade or so into the current millennium. Note that when Thiel mentions “getting involved in all these things” at the beginning, that involvement includes early funding for The Singularity Institute for Artificial Intelligence, which became the Machine Intelligence Research Institute in 2013.

Twenty years ago, when I started getting involved in all these things, the narrative was still a generally positive, utopian one. People thought, you know, it’s kind of a dangerous technology: if you build this computer that’s as smart as or smarter than any human being in the world, it’s kind of dangerous, but we’re gonna have to work really hard to make sure it’s friendly, that it’s aligned with humans. And circa 2003, whatever misgivings people might have had about biotech or rockets or nuclear power, they did not yet have about AI, and the AI narrative was still a generally positive, utopian one.

And there’s sort of a strange way in which this has completely flipped over the last decade or so. I was involved [with] a thing called The Singularity Institute, which pushed a sort of accelerationist, utopian technology: we’re progressing, we need to progress faster, we need of course to be a little bit careful. And I sort of remember thinking to myself, by 2015, when I reconnected [with] so many people, it didn’t feel like they were really pushing the AI thing as fast as before, and it had sort of devolved into, you know, some kind of escapist Burning Man camp.

You sort of got the sense that it had shifted from transhumanism to something Luddite, where: no, actually, we want to slow this down, it feels kind of dangerous, it’s kind of a bad thing on net. And this suspicion, I think, was finally confirmed. You can look it up on the Internet; I’m gonna read this. It’s from April 2022, less than a year ago, [by] Eliezer Yudkowsky, who’s one of the sort of thought leaders of the futurist AI [movement]. It’s a post from the Machine Intelligence Research Institute announcing a new “Death with Dignity” strategy, and the short version of it is this:

It's obvious at this point that humanity isn't going to solve the alignment problem, or even try very hard, or even go out with much of a fight. Since survival is unattainable, we should shift the focus of our efforts to helping humanity die with with [sic] slightly more dignity.

I want to underscore that: you don’t deserve to die with a lot of dignity, because you’re not going to “try very hard, or even go out with much of a fight.” But it is extraordinary, it’s an extraordinary way in which the context has shifted.

What happened to bring about this shift in attitude? I’m wondering if it was a failure of nerve.

In any event, we should note that having the business acumen needed to become rich by backing high-tech ventures does not imply any deep insight into the future or, for that matter, into technology itself. Technology changes so fast that one can easily become rich on technology that will be obsolete a decade after the ink dries on the first check.
