Nor do I. I find his reasoning convincing. Here's a chunk:
I don’t see why I should be much more worried about your losing control of your firm, or army, to an AI than to a human or group of humans. And liability insurance also seems a sufficient answer to your possibly losing control of an AI driving your car or plane. Furthermore, I don’t see why it’s worth putting much effort into planning how to control AIs far in advance of seeing much detail about how AIs actually do concrete tasks where loss of control matters. Knowing such detail has usually been the key to controlling past systems, and money invested now, instead of spent on analysis now, gives us far more money to spend on analysis later.
All of the above has been based on assuming that AI will be similar to past techs in how it diffuses and advances. Some say that AI might be different, just because, hey, anything might be different. Others, like my ex-co-blogger Eliezer Yudkowsky, and Nick Bostrom in his book Superintelligence, say more about why they expect advances at the scope of AGI to be far more lumpy than we’ve seen for most techs.
Yudkowsky paints a “foom” picture of a world full of familiar weak stupid slowly improving computers, until suddenly and unexpectedly a single super-smart un-controlled AGI with very powerful general abilities appears and is able to decisively overwhelm all other powers on Earth. Alternatively, he claims (quite implausibly I think) that all AGIs naturally coordinate to merge into a single system to defeat competition-based checks and balances.
These folks seem to envision a few key discrete breakthrough insights that allow the first team that finds them to suddenly catapult their AI into abilities far beyond all other then-current systems. These would be big breakthroughs relative to the broad category of “mental tasks”, and thus even bigger than if we found big breakthroughs relative to the less broad tech categories of “energy”, “transport”, or “shelter”. Yes of course change is often lumpy if we look at small tech scopes, but lumpy local changes aggregate into smoother change over wider scopes.
As I’ve previously explained at length, that seems to me to postulate a quite unusual lumpiness relative to the history we’ve seen for innovation in general, and more particularly for tools, computers, AI, and even machine learning. And this seems to postulate much more of a lumpy conceptual essence to “betterness” than I find plausible. Recent machine learning systems today seem relatively close to each other in their abilities, are gradually improving, and none seem remotely inclined to mount a coup.
It's worth noting that Hanson once blogged with Yudkowsky, so, presumably, he understands Yudkowsky's worldview. Which is to say, Hanson is closer to the worldview of the AI Alignment folks than I am. But, still, he finds their fear of future AI to be unfounded.
There's more at the link.
Addendum, 6.27.22: My reply to Hanson:
"...why are AI risk efforts a priority now?"
In the first place, they have more to do with the "Monsters from the Id" in the 1956 film Forbidden Planet than with a rational assessment of the world. It's a conspiracy theory directed at a class of objects no one knows how to build, though obviously many are trying to build them.
As for Yudkowsky, I have made several attempts to read a long article he published in 2007, "Levels of Organization in General Intelligence." I just can't bring myself to finish it. Why? His thinking represents the triumph of intension over extension.
As you know, philosophers and logicians distinguish between the intension of a concept or a set and its extension. Its intension is its definition. Its extension is its footprint in the world; in the case of a set, the objects that are members of the set. Yudkowsky builds these elaborate contraptions from intensions with only scant attention to the possible or likely extensions of his ideas. He's building castles in the air. There's little there but his prose.
Thinking about AI risk seems like this as well. Why the recent upswing in these particular irrational fears? Does it track the general rise in conspiracy thinking in this country, especially since January 6th? There's no particular reason that it should, but we're not looking for rational considerations. We're tracking the manifestations of general free-floating anxiety. This particular community is intellectually sophisticated, so such anxiety finds expression in a very sophisticated vehicle.
And our culture has reasonably rich resources for stories of creatures born of human desire coming back to haunt us. Forbidden Planet, after all, was loosely based on Shakespeare's The Tempest. In between the two we have Dr. Frankenstein's monster. And so forth. So these intellectually sophisticated folks decided to tap into that.
Now, do I actually believe that? Well, that's tricky. I don't disbelieve it. I think it's possible, even plausible, but sure, I'd like to see a tighter argument.
Here's another question. As far as I can tell, this fear is mostly American-based, with some outposts in the UK. But the Japanese, though they have very sophisticated computer technology, don't seem to worry much about rogue AI. Why not? What I'm suggesting, of course, is that this fear arises in a particular cultural context, and that context is not universal. I'd love to see international survey data on this.