I’ve known about LessWrong for years, though just how many I don’t know. But I’ve been hanging out there as a participant observer since early spring. I mostly observe, checking in (almost) every day, but I’ve made a number of comments on posts, and I’ve made some posts of my own as well.
For those who don’t know, LessWrong is an important gathering place for the Rationalist Movement, but it’s also the epicenter of thinking about AI existential risk, that is, AI Doom. That’s what interests me here.
It’s clear to me that a belief in the inevitability of AGI, and that it may, or even most likely will, kill us, is central to what goes on there. For myself, I tend to think that the concept of AGI is so vague as to be useless for technical purposes, and that AI Doom is some kind of projective mirage. But if you could manage to strip those out, what’s left is interesting. I suspect it would become more interesting and valuable without them. To the extent that LessWrong is the center of a cult – a word whose negative connotations outweigh its epistemic value – it’s the belief in AI Doom that’s the culprit.
Having said that, I can imagine that it took an extreme position – we’re doomed by AI unless we wake up! – to get things started. But that belief has now outlived its usefulness as a tactic of socio-cultural engineering (though I doubt it was conceived as such) and has become an impediment. I think AI Safety is now sufficiently well established that it can survive without the self-righteous apocalypticism. Of course, large complex computer systems always raise concerns of various kinds, if for no other reason than that they’re buggy. But AI Safety is more than that, and it’s more than AI ethics; just what that “more” is, though, isn’t so clear. The AI Doom business just gets in the way.
However, one of the tropes that keeps coming up in the exploration of AI Doom is that a sufficiently powerful AI engine, for want of a better term, is likely to be sneaky and conniving, manipulating humans into serving its nefarious ends at our expense. That’s what makes these super-intelligent beings so dangerous: they can out-think us. Now, it seems to me that if they’re that capable, then on the way there they must have reached a point where they are worthy of our respect and entitled to be treated with dignity.
Yet I’ve seen no talk of that at LessWrong. Perhaps it’s there, but I’ve not yet come across it. In fact, I just now searched the site for the term “robot rights” and got some hits, but not many. I suspect it’s a serious issue, though how serious I don’t know. I do know that there is a specialized literature on the subject, but I’ve not looked into it, and I don’t know how far my opportunity-cost-o-meter will allow me to do so. But Steven Spielberg’s A.I.: Artificial Intelligence certainly raises the issue (while also drowning in mimetic desire), and that film owes a cultural debt to Osamu Tezuka’s Astro Boy stories (well known to Stanley Kubrick, who initiated the film project) and to Pinocchio.
It seems at least inconsistent to treat AIs as potentially malevolent actors without actively exploring the possibility that they could be benevolent as well. The LessWrong impulse is to inhibit AIs, to cage them in somehow (though, actually, there’s a lot of thought about how that’s impossible). But where’s the urge to entice and attract? Why not treat AIs and robots with dignity and respect?
More later.
Further reading:
- Séb Krier, AI from Superintelligence to ChatGPT.
- Scott Aaronson, Reform AI Alignment.