Wednesday, October 5, 2022

AI-Doom – Just who is a cargo cult?

Two months ago I spotted a post on LessWrong by jam-mosig, Paper reading as a Cargo Cult. It was the presence of the word “cult” in the title that caught my attention. Why? Because I think that belief in AI-Doom is ‘cult’ behavior – see my post in 3 Quarks Daily, On the Cult of AI Doom – but I’ve got reservations about the word ‘cult.’ Why? Well, I’m not clear on just what the word means or how to apply it to the world. Here’s the problem, sorta’. Every now and again I’ve seen a discussion of the difference between a cult and a religion in which it is observed that a religion is just a cult that has managed to achieve legitimacy. That is to say, the difference is not a matter of the beliefs themselves, but of their social standing. This post is not the place to attempt to hash that out, but I think it’s a legitimate issue.

However, LessWrong is a place where AI Doom (that is, AI as existential risk) is a legitimate complex of beliefs. It is one of the central sites on the web for discussions of those ideas. Thus it is interesting and significant to see the word used on that site. It’s a sure indication that the issue of broader legitimacy is explicitly recognized in that world.

Note, however, that the post isn’t specifically about belief in AI Doom, but rather about the broader issue of AI alignment, where the possibility that AI presents an existential risk is only one issue among many. But it is the most extreme and distressing one.

* * * * *

Having said that, let’s take a look at the post. Here’s how it opens:

I have come across various people (including my past self) who meet up regularly to study, e.g., alignment forum posts and discuss them. This helps people bond over their common beliefs, fears, and interests, which I think is good, but in no way is this ever going to lead anyone to find a solution to the alignment problem. In this post I'll reason why this doesn't help, and what I think we should do instead.

The cult

Reading good papers can be fun. You learn something interesting and, if the topic is hard but well presented by the authors, you get a kick from finally understanding something complicated. But is what you learned actually useful for the problem at hand? What is the question that drove you to read this paper in such detail?

Yes, you need to regularly skim papers for fun, so you get an idea of what's out there and where to look when you need something. You also need to absorb terminology and good writing practice, so you can communicate your own research. Yet, I believe that fun-reading should only occupy a tiny fraction of your time, as you have more important things to do (see next section).

Despite its relative unimportance, paper reading groups tend to focus a lot on this fun-reading aspect. They are more of a social gathering than a mechanism to boost progress.

The post is, in effect, distinguishing between a serious concern about AI alignment (which includes AI Doom) and a more superficial commitment, one that is thus cultlike and centers on social activity rather than intellectual investigation.

After some more remarks about ‘the cult,’ jam-mosig takes up the issue of “actual science”:

To drive scientific progress means to do something that nobody else has ever done before. This means that your idea or line of research tends to seem strange to others (at first sight). At the same time, it also tends to seem obvious to you - it's just the natural next step when you take seriously what you've learnt so far.

Before I properly reconcile "strange" and "obvious" here, let me warn you of a trap: It is very easy to have an idea that seems obvious to you, but strange to others, when you are delusional. Especially when you are good at arguing, you can easily make yourself believe that you are right and everybody else is just not seeing it. Beware that trap.

I find that second paragraph interesting because it outlines the epistemological problem presented by cultishness. Cult beliefs are obvious to those in the cult, but strange to others.

There’s more to the post, though not much more, but that’s enough for my purpose, which is simply to show that the issue of cultishness is a real one within the AI alignment community. The author seems intent on showing that, while they once engaged in this cultish behavior, they’ve since moved beyond it. But those other people, over there... they’ve got to change their ways.
