David Marchese, "An A.I. Pioneer on What We Should Really Fear," The New York Times, December 21, 2022.
Common sense
Can you explain what “common sense” means in the context of teaching it to A.I.? A way of describing it is that common sense is the dark matter of intelligence. Normal matter is what we see, what we can interact with. We thought for a long time that that’s what was there in the physical world — and just that. It turns out that’s only 5 percent of the universe. Ninety-five percent is dark matter and dark energy, but it’s invisible and not directly measurable. We know it exists, because if it doesn’t, then the normal matter doesn’t make sense. So we know it’s there, and we know there’s a lot of it. We’re coming to that realization with common sense. It’s the unspoken, implicit knowledge that you and I have. It’s so obvious that we often don’t talk about it. For example, how many eyes does a horse have? Two. We don’t talk about it, but everyone knows it. We don’t know the exact fraction of knowledge that you and I have that we didn’t talk about — but still know — but my speculation is that there’s a lot. Let me give you another example: You and I know birds can fly, and we know penguins generally cannot. So A.I. researchers thought, we can code this up: Birds usually fly, except for penguins. But in fact, exceptions are the challenge for common-sense rules. Newborn baby birds cannot fly, birds covered in oil cannot fly, birds who are injured cannot fly, birds in a cage cannot fly. The point being, exceptions are not exceptional, and you and I can think of them even though nobody told us. It’s a fascinating capability, and it’s not so easy for A.I.
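As a rough illustration of why hand-coded exceptions do not scale, here is a toy sketch of my own (in Python; it is not anything from the interview, and the feature names are invented): a naive rule for "birds usually fly" has to anticipate every exception in advance, and the list never ends.

```python
# Toy sketch (not from the interview): a naive rule-based encoding of
# "birds usually fly". Every exception must be anticipated and hard-coded,
# which is the problem described above: exceptions are not exceptional.

def can_fly(bird):
    """Best guess at whether this bird can fly.

    `bird` is assumed to be a dict of hand-chosen, made-up features, e.g.
    {"species": "robin", "is_newborn": False, "covered_in_oil": False}.
    """
    if bird.get("species") == "penguin":
        return False
    if bird.get("is_newborn"):
        return False
    if bird.get("covered_in_oil"):
        return False
    if bird.get("is_injured"):
        return False
    if bird.get("in_cage"):
        return False
    # ...and so on: ostriches, clipped wings, birds in vacuum chambers.
    # Any feature nobody thought to write down is silently ignored.
    return True

print(can_fly({"species": "robin"}))                      # True
print(can_fly({"species": "penguin"}))                    # False
print(can_fly({"species": "robin", "is_injured": True}))  # False
```

Each clause covers one exception someone happened to think of; the cases nobody anticipated, which the interview argues are the interesting ones, fall straight through to the default.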
Value Pluralism
So what’s most exciting to you right now about your work in A.I.? I’m excited about value pluralism, the fact that value is not singular. Another way to put it is that there’s no universal truth. A lot of people feel uncomfortable about this. As scientists, we’re trained to be very precise and strive for one truth. Now I’m thinking, well, there’s no universal truth — can birds fly or not? Or social and cultural norms: Is it OK to leave a closet door open? Some tidy person might think, always close it. I’m not tidy, so I might keep it open. But if the closet is temperature-controlled for some reason, then I will keep it closed; if the closet is in someone else’s house, I’ll probably behave. These rules basically cannot be written down as universal truths, because when applied in your context versus in my context, that truth will have to be bent. Moral rules: There must be some moral truth, you know? Don’t kill people, for example. But what if it’s a mercy killing? Then what? [...]
Is the ultimate hope that A.I. could someday make ethical decisions that might be sort of neutral or even contrary to its designers’ potentially unethical goals — like an A.I. designed for use by social media companies that could decide not to exploit children’s privacy? Or is there just always going to be some person or private interest on the back end tipping the ethical-value scale? The former is what we wish to aspire to achieve. The latter is what actually inevitably happens. In fact, Delphi is left-leaning in this regard because many of the crowd workers who do annotation for us are a little bit left-leaning. Both the left and right can be unhappy about this, because for people on the left Delphi is not left enough, and for people on the right it’s potentially not inclusive enough. But Delphi was just a first shot. There’s a lot of work to be done, and I believe that if we can somehow solve value pluralism for A.I., that would be really exciting. To have A.I. values not be one systematic thing but rather something that has multidimensions just like a group of humans. [...]
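One way to picture what "multi-dimensional" values could mean in practice is to keep the full distribution of annotator judgments rather than collapsing them into a single label. The sketch below is my own illustration, not a description of how Delphi is actually built, and the actions and judgments in it are invented.

```python
# Toy sketch (my illustration, not Delphi's design): keep the spread of
# annotator judgments instead of forcing one "correct" answer, so the
# system can report disagreement rather than a fake universal truth.
from collections import Counter

annotations = {
    "leaving a closet door open": ["it's fine", "it's fine", "it's rude", "it's fine"],
    "mercy killing": ["it's wrong", "it depends", "it's acceptable", "it depends"],
}

def value_profile(action):
    """Return each judgment with the fraction of annotators who gave it."""
    counts = Counter(annotations[action])
    total = sum(counts.values())
    return {judgment: round(count / total, 2) for judgment, count in counts.items()}

print(value_profile("mercy killing"))
# {"it's wrong": 0.25, "it depends": 0.5, "it's acceptable": 0.25}
```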
Could it be that if humans are in situations where we’re relying on A.I. to make moral decisions then we’ve already screwed up? Isn’t morality something we probably shouldn’t be outsourcing in the first place? You’re touching on a common — sorry to be blunt — misunderstanding that people seem to have about the Delphi model we made. It’s a Q. and A. model. We made it clear, we thought, that this is not for people to take moral advice from. This is more of a first step to test what A.I. can or cannot do. My primary motivation was that A.I. does need to learn moral decision-making in order to be able to interact with humans in a safer and more respectful way.
Take that, Nick Bostrom!
Like the Nick Bostrom paper clip example, which I know is maybe alarmist. But is an example like that concerning? No, but that’s why I am working on research like Delphi and social norms, because it is a concern if you deploy stupid A.I. to optimize for one thing. That’s more of a human error than an A.I. error. But that’s why human norms and values become important as background knowledge for A.I. Some people naïvely think if we teach A.I. “Don’t kill people while maximizing paper-clip production,” that will take care of it. But the machine might then kill all the plants. That’s why it also needs common sense. It’s common sense not to kill all the plants in order to preserve human lives; it’s common sense not to go with extreme, degenerative solutions.
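To make the paper-clip point concrete, here is a deliberately silly sketch of my own (not Bostrom's example and not from the interview; the world state is invented): an optimizer given one objective and one hand-written constraint wipes out everything the constraint forgot to mention.

```python
# Toy sketch: "maximize paper clips, don't kill people" as the only rule.
# Nothing in the objective or the constraint mentions plants, so the
# optimizer converts them too.

# Hypothetical world state: resources the optimizer can turn into clips.
world = {"iron_ore": 100, "plants": 50, "humans": 10}

def make_paperclips(state):
    """Greedily convert every resource into paper clips, subject to the
    single constraint someone thought to write down."""
    clips = 0
    for resource, amount in state.items():
        if resource == "humans":   # the one hand-written constraint
            continue
        clips += amount            # everything else is fair game, plants included
        state[resource] = 0
    return clips

print(make_paperclips(world))  # 150
print(world)                   # {'iron_ore': 0, 'plants': 0, 'humans': 10}
```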
There’s more in the interview.