
Saturday, September 3, 2022

Rogue AI & the relationship between reality and community orientations

Time once again to chew the AI-Doom bone.

Rohit over at Strange Loop Canon has two recent posts, one arguing that AI risk is modern eschatology and another that Effective Altruism is a constant fight against Knightian uncertainty. Since AI existential risk (x-risk) is one of EA’s target concerns, the two topics are closely related. I commented on both. Here’s my comment on EA:

I've spent more time thinking about AI x-risk than about EA in general. But of course they're closely related, as AI x-risk is one of the causes embraced by EA. It's my understanding that EA didn't start out with a focus on long-termism; that emphasis emerged later.

The problem, as your title indicates, is that we're dealing with radical uncertainty. In the case of AI x-risk the fundamental problem is that we don't know how to think about AGI in terms of mechanisms, as opposed to FOOM-like magic. The AI x-risk people respond by creating elaborate predictive contraptions around something where meaningful quantitative reasoning is impossible. You're arguing that the EA folks are doing this as well.

Why?

It seems to me that at some point the mechanisms of community overwhelmed the objectives the community was created to address. So now those objectives function as a reason for engaging in this elaborate ritual intellection. The community is now more engaged in elaborating its rituals than in dealing with the world. How does that happen, and why?

We've got community orientation (CO) and reality orientation (RO). CO should be subordinate to RO and should serve it. What has happened is that RO has become subordinate to CO. Put your old McKinsey hat on: How do you measure CO and RO of a group and plot their evolution over time? What's going on at the tipping point where CO surpasses RO? I think that happened in the AI x-risk space at about the time Bostrom published Superintelligence.

It’s that last paragraph that has my current attention.

The point is that we’ve got two interacting forces or socio-cultural dynamics, which I called community orientation (CO) and reality orientation (RO). At this point those are little more than names for things I’ve not specified in any (empirical) detail. That’s not going to change anytime soon.
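Still, to make the question a bit more concrete, here is a minimal sketch of the kind of toy model I have in mind when I ask about plotting CO and RO over time and locating the tipping point. Everything in it is invented for illustration: CO and RO are just two coupled quantities, the growth and crowding-out rates are made up, and the "tipping point" is simply the first moment at which the CO curve crosses the RO curve. It is not an empirical claim about any actual community.

    # Toy illustration only: treat "community orientation" (CO) and
    # "reality orientation" (RO) as two coupled quantities and report
    # the tipping point where CO overtakes RO. All numbers are invented.

    def simulate(steps=200, dt=0.1,
                 co0=0.2, ro0=0.8,       # arbitrary starting levels
                 co_growth=0.15,         # CO feeds on itself (ritual begets ritual)
                 crowd_out=0.10):        # CO crowds out RO
        co, ro = co0, ro0
        history = []
        for t in range(steps):
            history.append((t * dt, co, ro))
            d_co = co_growth * co * (1.0 - co)   # logistic self-reinforcement
            d_ro = -crowd_out * co * ro          # RO eroded in proportion to CO
            co += dt * d_co
            ro += dt * d_ro
        return history

    def tipping_point(history):
        # First time at which CO surpasses RO, or None if it never does.
        for t, co, ro in history:
            if co > ro:
                return t
        return None

    if __name__ == "__main__":
        hist = simulate()
        print("tipping point (toy time units):", tipping_point(hist))

The point of the exercise isn't the output, which is meaningless, but the shape of the question: what observable proxies would let you estimate something like co_growth and crowd_out for a real community, and date its crossover?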

What that crude conceptual hack gives me is a way of thinking about the dynamics of a group of people with a common interest. At some point those dynamics undergo a change; the result, in this case, is an intellectually closed community organized around fear of the x-risk posed by AI. In fact, I’m tempted to say that these people didn’t come to function AS a community until their attention was focused on AI x-risk. Prior to that they were just a bunch of people interested in AI, AGI in particular, who may also have been interested in what would happen if such engines, once they emerged, went rogue. Once THAT had their attention, they ‘closed ranks’ with one another and became a community.

More later.
