With all the flurry around and about ChatGPT I thought I'd bump this to the top of the queue. Check out this tweet that just rolled in:
Yes, indeed. It looks like #ChatGPT has been reading/trained on the right source material : )
— David J. Gunkel (@David_Gunkel) December 7, 2022
* * * * *
Let us assume, as some do, though I do not, that human-level artificial intelligences (AGIs) are inevitable. Some people worry that those AIs will go rogue. Do these people also think about our ethical obligations to those AIs? Surely we have such obligations, no? Are they comparable to the ethical obligations we owe real human beings? Or animals, for that matter?
If they’re not thinking about these things, why not? Back in the days when I was imagining a computer model of Shakespeare, I did so because I wanted to examine what happened as the model read a Shakespeare play. It did occur to me that, if the model were rich enough, such an inspection might constitute an invasion of privacy – though I never published those qualms.
There is a somewhat different case, that of mind uploads or whole brain emulation. In this scenario individuals have their minds uploaded to some computer system, where they can live as long as that system exists. Robin Hanson has written a book about this, The Age of Em (em for emulation), in which he goes well beyond arguing that it is possible: he explores what that world might be like. I’ve not read it, though I’ve read some reviews, and I’ve read the 1994 article that Hanson says is the seed of the book, “If Uploads Come First.” Between that article and the table of contents it’s clear that Hanson is thinking about such things.
Of course an uploaded mind IS some kind of human, with human values and sensibilities; it just exists in a different kind of physical substrate. Moreover, Hanson imagines a world in which such beings interact with humans. But AIs constructed from scratch would be quite different. Those worried about the alignment problem ask what kinds of values to equip these systems with so that they’re not harmful to humans. But that’s not the problem I’m thinking about. I’m thinking about our obligations to them, our artificial children. Do these folks consider that problem? Do they even attempt to imagine a community of humans and AIs working together?
Finally, I should mention Osamu Tezuka’s Astro Boy stories, which he published between 1952 and 1968. One of Tezuka’s central concerns was, in effect, civil rights for robots – see my post, The Robot as Subaltern: Tezuka’s Mighty Atom. He wasn’t so much worried about robot violence against humans, though there was some of that, as about the opposite: human violence against robots. That’s a very different view of the world, no?