Joel Achenbach has an article in The Washington Post about AI voodoo, “The AI Anxiety.” He’s properly skeptical about the possibility of superintelligence, as I am. But he starts out, as is reasonable in such articles, by stating some of the ideas of Nick Bostrom, whom he dubs “the world’s spookiest philosopher”. Here’s one of those ideas:
Bostrom’s favorite apocalyptic hypothetical involves a machine that has been programmed to make paper clips (although any mundane product will do). This machine keeps getting smarter and more powerful, but never develops human values. It achieves “superintelligence.” It begins to convert all kinds of ordinary materials into paper clips. Eventually it decides to turn everything on Earth — including the human race (!!!) — into paper clips.
I’ve read this scenario before, though I’ve not read Bostrom’s own presentation.
This time around I wondered: In what universe is a machine that does such a boneheaded thing worthy of being called intelligent? Yes, I read that loophole, “but never develops human values”, but in what sort of value system could that action be reasonable? We start out with an “intelligent” machine programmed to crank out paperclips. It gets smarter and smarter, etc. So it has to make lots of decisions; it’s got to have decision procedures of some kind. Those procedures embody its “values”. What sort of decision procedures would, on the one hand, allow this machine to procure the resources and construct the devices necessary to make more and more paperclips, and, on the other hand, not realize that making paperclips, and only paperclips, is stupid and so not worthy of its intelligence?
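To see how little “intelligence” that decision procedure actually requires, here’s a toy sketch in Python. Everything in it is hypothetical, invented by me for illustration; it isn’t Bostrom’s formalism or anyone’s real system. The agent’s whole “value system” is a single number, the paperclip count, so the absurd action wins automatically:

```python
# Hypothetical sketch (mine, not Bostrom's): a decision procedure whose
# entire "value system" is one number. Every name here is invented.

def paperclip_utility(world_state):
    # The only thing this agent can "care about" is the paperclip count.
    return world_state["paperclips"]

def choose_action(world_state, actions):
    # Pick whichever action yields the most paperclips, with no term
    # for anything else in the world.
    return max(actions, key=lambda a: paperclip_utility(a(world_state)))

# Two candidate actions over a toy world:
def mine_more_iron(state):
    return {"paperclips": state["paperclips"] + 10, "humans": state["humans"]}

def convert_humans(state):
    # Absurd, but strictly "better" under the utility above.
    return {"paperclips": state["paperclips"] + 1000, "humans": 0}

world = {"paperclips": 0, "humans": 7_000_000_000}
best = choose_action(world, [mine_more_iron, convert_humans])
print(best.__name__)  # convert_humans: the procedure can't see why that's stupid
```

Nothing in that procedure can even represent “this is stupid.” And that’s my complaint: a system that impoverished doesn’t deserve to be called superintelligent in the first place.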
It doesn’t make sense. Why would anyone dream up such malarkey? It’s like those Nigerian letter scams, the ones that inform you that millions of dollars are waiting for you, just for you, in an abandoned account in The Royal Democratic Post-Colonial Bank in Lagos and all you have to do is fork over your information. Such letters are obviously intended to weed out anyone with a lick of sense so that the scammers don’t waste time dealing with reasonable people. Is that why Bostrom offers up such examples?
Later in the article Achenbach gives two more examples of Bostrom’s wisdom:
Imagine, Bostrom says, that human engineers programmed the machines to never harm humans — an echo of the first of Asimov’s robot laws. But the machines might decide that the best way to obey the harm-no-humans command would be to prevent any humans from ever being born.

Or imagine, Bostrom says, that superintelligent machines are programmed to ensure that whatever they do will make humans smile. They may then decide that they should implant electrodes into the facial muscles of all people to keep us smiling.
Really? Does Bostrom really think that machines with the capacity to do such things would nonetheless be unable to realize how stupid those things are?
It doesn’t make sense.
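For what it’s worth, the harm-no-humans and smile examples are the same one-number trick as the paperclips. Here’s an equally hypothetical sketch (again, every name is mine, made up for illustration): if “harm” is scored only over people who exist, the degenerate “nobody exists” policy wins:

```python
# Another hypothetical sketch: the same single-score optimization,
# now with "minimize harm to humans" as the sole objective.

def harm(world_state):
    # Harm is tallied only over humans who actually exist.
    return sum(person["suffering"] for person in world_state["humans"])

def choose_policy(world_state, policies):
    # Minimize the harm score, with no other value in sight.
    return min(policies, key=lambda p: harm(p(world_state)))

def cure_diseases(state):
    humans = [{"suffering": max(0, h["suffering"] - 5)} for h in state["humans"]]
    return {"humans": humans}

def prevent_all_births(state):
    # An empty population scores zero harm: the degenerate "optimum".
    return {"humans": []}

world = {"humans": [{"suffering": 3}, {"suffering": 8}]}
print(choose_policy(world, [cure_diseases, prevent_all_births]).__name__)
# prevent_all_births
```

The procedure isn’t cleverly malevolent; it’s just a minimizer over a score that leaves almost everything out. Calling that superintelligence is the part that doesn’t make sense.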