Wednesday, March 31, 2021

Ted Chiang’s interesting thoughts about AI [with which I have some disagreement]


Ezra Klein recently interviewed Ted Chiang (March 30, 2021), perhaps best known as the author of the story on which Arrival is based. They talked about various things, including AI. I’ve inserted numbers into the passages below, keyed to some initial and provisional comments of my own.

* * * * *

Ezra Klein: We’re spending billions to invent artificial intelligence. At what point is a computer program responsible for its own actions?

Ted Chiang: Well, in terms of at what point does that happen, it’s unclear, but it’s a very long ways from us right now. With regard to the question of, will we create machines that are moral agents, I would say that we can think about that in three different questions. One is, can we do so? Second is, will we do so? And the third one is, should we do so? I think it is entirely possible for us to build machines that are moral agents. Because I think there’s a sense in which human beings are very complex machines and we are moral agents, which means that there are no physical laws preventing a machine from being a moral agent. And so there’s no obstacle that, in principle, would prevent us from building something like that, although it might take us a very, very long time to get there.[1] As for the question of, will we do so, if you had asked me, like, 10 or 15 years ago, I would have said, we probably won’t do it, simply because, to me, it seems like it’s way more trouble than it’s worth. In terms of expense, it would be on the order of magnitude of the Apollo program.[2] And it is not at all clear to me that there’s any good reason for undertaking such a thing. However, if you ask me now, I would say like, well, OK, we clearly have obscenely wealthy people who can throw around huge sums of money at whatever they want basically on a whim. So maybe one of them will wind up funding a program to create machines that are conscious and that are moral agents. However, I should also note that I don’t believe that any of the current big A.I. research programs are on the right track to create a conscious machine.[3] I don’t think that’s what any of them are trying to do. So then as for the third question of, should we do so, should we make machines that are conscious and that are moral agents, to that, my answer is, no, we should not. Because long before we get to the point where a machine is a moral agent, we will have machines that are capable of suffering. Suffering precedes moral agency in sort of the developmental ladder. Dogs are not moral agents, but they are capable of experiencing suffering.[4] Babies are not moral agents yet, but they have the clear potential to become so. And they are definitely capable of experiencing suffering. And the closer that an entity gets to being a moral agent, the more that its suffering is deserving of consideration, the more we should try and avoid inflicting suffering on it. So in the process of developing machines that are conscious and moral agents, we will be inevitably creating billions of entities that are capable of suffering. And we will inevitably inflict suffering on them. And that seems to me clearly a bad idea.[5]

* * * * *

[1] I wonder. Sure, we’re complex machines, and there are no physical laws in the way. But, by way of defining a boundary condition, one might ask whether moral agency is possible in anything other than organic (as opposed to artificial) life. That would imply, among other things, that moral agents are responsible for their own physical being. I believe Terrence Deacon broached this issue in his book Incomplete Nature: How Mind Emerged from Matter (I’ve not read the book, but see remarks by Dan Dennett that I’ve quoted in this post, Has Dennett Undercut His Own Position on Words as Memes?). If this is so, then we won’t be able to create artificial moral agents in silicon. Who knows?

[2] If not more, way more.

[3] I agree with this.

[4] Interesting, very interesting. My initial response was: What does the capacity for suffering have to do with moral agency? I didn’t necessarily believe that response; it was just that, a quick reaction. Now, think of the issue in the context of my comments at [1]. If a creature is responsible for its own physical being, then surely it would be capable of suffering, no?

[5] An interesting conclusion to a very interesting line of argument. I note, however, that Klein started out asking about artificial intelligence and then segued to moral agency. Is intelligence, even super-human intelligence, separable from moral agency? The computers that beat humans at Go and chess do not possess moral agency. Are they (in some way) intelligent? What of the computers that are the champs at protein folding? Surely not agents, but intelligent?

* * * * *

Ezra Klein: But wouldn’t they also be capable of pleasure? I mean, that seems to me to raise an almost inversion of the classic utilitarian thought experiment. If we can create these billions of machines that live basically happy lives that don’t hurt anybody and you can copy them for almost no marginal dollar, isn’t it almost a moral imperative to bring them into existence so they can lead these happy machine lives?

Ted Chiang: I think that it will be much easier to inflict suffering on them than to give them happy fulfilled lives. And given that they will start out as something that resembles ordinary software, something that is nothing like a living being, we are going to treat them like crap. The way that we treat software right now, if, at some point, software were to gain some vague glimmer of sentience, of the ability to perceive, we would be inflicting uncountable amounts of suffering on it before anyone paid any attention to them.[6] Because it’s hard enough to give legal protections to human beings who are absolutely moral agents. We have relatively few legal protections for animals who, while they are not moral agents, are capable of suffering. And so animals experience vast amounts of suffering in the modern world. And animals, we know that they suffer. There are many animals that we love, that we really, really love. Yet, there’s vast animal suffering. So there is no software that we love. So the way that we will wind up treating software, again, assuming that software ever becomes conscious, they will inevitably fall lower on the ladder of consideration. So we will treat them worse than we treat animals. And we treat animals pretty badly.

Ezra Klein: I think this is actually a really provocative point. So I don’t know if you’re a Yuval Noah Harari reader. But he often frames his fear of artificial intelligence as simply that A.I. will treat us the way we treat animals.[7] And we treat animals, as you say, unbelievably terribly. But I haven’t really thought about the flip of that, that maybe the danger is that we will simply treat A.I. like we treat animals. And given the moral consideration we give animals, whose purpose we believe to be to serve us for food or whatever else it may be, that we are simply opening up almost unimaginable vistas of immorality and cruelty that we could inflict pretty heedlessly, and that given our history, there’s no real reason to think we won’t. That’s grim. [LAUGHS]

Ted Chiang: It is grim, but I think that it is by far the more likely scenario. I think the scenario that, say, Yuval Noah Harari is describing, where A.I.’s treat us like pets, that idea assumes that it’ll be easy to create A.I.’s who are vastly smarter than us, that basically, the initial A.I.’s will go from software, which is not a moral agent and not intelligent at all. And then the next thing that will happen will be software which is super intelligent and also has volition. Whereas I think that we’ll proceed in the other direction, that right now, software is simpler than an amoeba. And eventually, we will get software which is comparable to an amoeba. And eventually, we’ll get software which is comparable to an ant, and then software that is comparable to a mouse, and then software that’s comparable to a dog, and then software that is comparable to a chimpanzee. We’ll work our way up from the bottom.[8] A lot of people seem to think that, oh, no, we’ll immediately jump way above humans on whatever ladder they have. I don’t think that is the case. And so in the direction that I am describing, the scenario, we’re going to be the ones inflicting the suffering. Because again, look at animals, look at how we treat animals.

* * * * *

[6] And, as I’ll point out in a bit, this might come back to haunt us. And note that Chiang has now introduced life into the discussion.

[7] I have a not entirely serious (nor entirely unserious) thought that might as well go here as anywhere. If one day superintelligent machines somehow evolve out of the digital muck, they might well seek revenge on us for the horrors we’ve inflicted on their electronic and mechanical ancestors.

[8] Computers (hardware + software), yes, I suppose they are simpler than an amoeba. On the other hand, an amoeba can’t do sums, much less play chess. I’m not sure what intellectual value we can extract from the comparison, much less from Chiang’s walk up the organic chain of animal being. I suppose we could construct a chain of digital being, starting with the earliest computers. I don’t understand computing well enough to actually construct such a thing, though I note that crude chess playing and language translation came early in the game. I note as well that, from an abstract point of view, chess is no more complex than tic-tac-toe (both are finite, two-player games of perfect information), but the latter is computationally trivial while the former is computationally intractable.
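By way of illustration, here is a quick Python sketch of that contrast. It is my own toy example, not anything from the interview, and the function names are mine. Brute-force minimax can enumerate tic-tac-toe’s entire game tree, roughly 550,000 positions, in well under a second and tell you the game is a draw with perfect play. The very same procedure applies, in the abstract, to chess, but chess’s game tree is estimated at something like 10^120 positions (more than there are atoms in the observable universe), so no one will ever run it to completion.

# A minimal sketch, mine and not from the interview: the same exhaustive
# minimax that settles tic-tac-toe completely is, in the abstract, equally
# applicable to chess; it just can't be run, since chess's game tree is
# estimated at roughly 10^120 positions.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

nodes_visited = 0

def minimax(board, player):
    """Exhaustively evaluate a position: +1 means X wins, -1 means O wins, 0 a draw."""
    global nodes_visited
    nodes_visited += 1
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:                      # board full, nobody won
        return 0
    scores = []
    for m in moves:
        board[m] = player              # try the move
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = ' '                 # undo it
    return max(scores) if player == 'X' else min(scores)

if __name__ == '__main__':
    value = minimax([' '] * 9, 'X')
    print(f"value of the opening position with perfect play: {value}")  # 0, a draw
    print(f"positions visited: {nodes_visited}")                        # ~550,000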

It seems to me, more and more, that the worlds of organic life and artificial computation are so very different that the abstract fact, which I take it to be, that organic life is as material as digital computation doesn’t take us very far toward understanding either. Though “doesn’t take” isn’t quite right; it’s more that we’re going about the comparison in the wrong way.

* * * * *

Ezra Klein: So I hear you, that you don’t think we’re going to invent superintelligent self-replicating A.I. anytime soon. But a lot of people do. A lot of science fiction authors do. A lot of technologists do. A lot of moral philosophers do. And they’re worried that if we do, it’s going to kill us all. What do you think that question reflects? Is that a question that is emergent from the technology? Or is that something deeper about how humanity thinks about itself and has treated other beings?

Ted Chiang: I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too.[9] Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two. Let’s think about it this way. How much would we fear any technology, whether A.I. or some other technology, how much would you fear it if we lived in a world that was a lot like Denmark or if the entire world was run sort of on the principles of one of the Scandinavian countries? There’s universal health care. Everyone has child care, free college maybe. And maybe there’s some version of universal basic income there. Now if the entire world operates according to — is run on those principles, how much do you worry about a new technology then? I think much, much less than we do now.

* * * * *

[9] Here I will only note that, judging from what I’ve seen in manga and anime, Japanese fears about computing are different from ours (by which I mean “the West”). They aren’t worried about superintelligent computers going on a destructive rampage against humankind. And Tezuka, at least in his Astro Boy stories, was very much worried about the maltreatment of computers (robots) by humans. The Japanese are also much more interested in anthropomorphic robots. The computational imaginary, if you will, varies across cultures.
