
Monday, August 29, 2022

My current thoughts on AI Doom as a cult(ish) phenomenon

I’ve been working on an article in which I argue that belief in, and interest in, the likelihood that future AI poses an existential risk to humanity – AI Doom – has become the focal point of cult behavior. I’m trying to figure out how to frame the argument. Which is to say, I’m trying to figure out what kind of an argument to make.

This belief has two aspects: 1) human level artificial intelligence, aka artificial general intelligence (AGI), is inevitable and will very likely, perhaps inevitably, lead to superintelligence, and 2) this intelligence will very likely turn on us, either deliberately or inadvertently. If both of those are true, then belief in AI Doom is rational. If either is false, then belief in AI Doom is mistaken. Cult behavior, then, is undertaken to justify and support these mistaken beliefs.

That last sentence is the argument I want to make. I don’t want to actually argue that the underlying beliefs are mistaken. I wish to assume that. That is, I assume that on this issue our ability to predict the future is dominated by WE DON’T KNOW.

Is that a reasonable assumption? I believe it is. Which is to say, I don’t believe that those ideas are a reasonable way to confront the challenges posed by AI. And I’m wondering what motivates such maladaptive beliefs.

On the face of it such cult behavior is an attempt to assert magical control over phenomena which are beyond our capacity to control. It is an admission of helplessness.

* * * * *

I drafted the previous paragraphs on Friday and Saturday (August 26th and 27th) and then dropped the piece because I didn’t quite know what I was up to. Now I think I’ve figured it out.

Whether or not AGI and AI Doom are reasonable expectations for the future, that’s one issue, and it has its own set of arguments. That a certain (somewhat diffuse) group of people have taken those ideas and created a cult around them is a different issue. And that’s the argument I want to make. In particular, I’m not arguing that those people are engaging in cult behavior as a way of arguing against AGI/AI Doom. That is, I’m not trying to discredit believers in AGI/AI Doom as a way of discrediting AGI/AI Doom. I’m quite capable of arguing against AGI without saying anything at all about the people.

As far as I know, the term “artificial general intelligence” didn’t come into use until the mid-2000s, and focused concern about rogue AI began consolidating in the subsequent decade, well after the first two Terminator films (1984, 1991). It got a substantial boost with the publication of Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies in 2014, which popularized the (in)famous paperclip scenario, in which an AI tasked with creating as many paperclips as possible proceeds to convert the earth’s surface into paperclips.

One can believe in AGI without being a member of a cult and one can fear future AI without being a member of a cult. Just when these beliefs became cultish, that’s not clear to me. What does that mean, “became cultish”? It means, or implies, that people adopt those beliefs in order to join the group. But how do we tell that that has happened? That’s tricky and I’m not sure.

* * * * *

I note, however, that the people I’m thinking about – you’ll find many of them hanging out at places like LessWrong, Astral Codex Ten, and Open Philanthropy – tend to treat the arrival of AGI as a natural phenomenon, like the weather, over which they have little control. Yes, they know that the technology is created by humans, many of whom are their friends; they may themselves be actively involved in AI research; and, yes, they want to slow AI research and influence it in specific ways. But they nonetheless regard the emergence of AGI as inevitable. It’s ‘out there’ and is happening. And once it emerges, well, it’s likely to go rogue and threaten humanity.

The fact is, the idea of AGI is vague, and, whatever it is, no one knows how to construct it. There’s substantial fear that it will emerge through the scaling up of current machine learning technology. But no one really knows that, or can explain how it would happen.

And that’s what strikes me as so strange about the phenomenon, the sense of helplessness. All they can do is make predictions about when AGI will emerge and sound the alarm to awaken the rest of us, but if that doesn’t work, we’re doomed.

How did this come about? Why do these people believe that? Given that so many of these people are actively involved in creating the technology – see this NYTimes article from 2018 in which Elon Musk sounds the alarm while Mark Zuckerberg dismisses his fears – one can read it as a narcissistic over-estimation of their intellectual prowess, but I’m not sure that covers it. Or perhaps I know what the words mean, but don’t really understand what the assertion amounts to. I mean, if it’s narcissism, it’s not at all obvious to me that Zuckerberg is less narcissistic than Musk. To be sure, that’s just two people, but many in Silicon Valley share their views.

Of course, I don’t have to explain why in order to make the argument that we’re looking at cult behavior. That’s the argument I want to make. How to make it?

More later.

3 comments:

  1. I thought this was an interesting subject when you brought it up.

    Just down to the fact that my past experience of predictions about the future rests in prophecy, the question I would ask is: what is the role of politics here?

    Looking for any comparative value I suppose.

    The subject that comes to mind is symbolic politics / symbolic processing: I haven't read the theory here, is it of any value?

  2. I'm not aware of the body of theory you mention in your last sentence.

  3. I've not read anything here, simply aware that it exists and that I really should read it. Theory of Warfare and ethnic violence by Stuart J Kaufman.

    I think he also looks at the party political system in the U.S. I've done little other than glance at it, but on the surface it looks potentially attractive as a comparative model.
