Monday, April 10, 2023

How’s the AI apocalypse coming along?

On November 30, 2022, OpenAI released ChatGPT to the world. Five days later a million users had signed up. Within the last month or so Microsoft, allied with OpenAI, and Google declared commercial war in the search-engine space, a New York Times reporter was freaked out by a bonkers session with Bing/Sydney, and assorted other outrages were registered in the public’s mind. On Wednesday, March 22, the nonprofit Future of Life Institute released an open letter urging a 6-month moratorium on the development of “advanced” AI systems. It has been signed by thousands, including some of the most important people in AI and technology. A week later Eliezer Yudkowsky published a letter in Time Magazine saying that wasn’t enough, that it was time to shut down AI research altogether. Yudkowsky is known for his belief that sufficiently advanced AI will most likely destroy humankind.

How’s this working out?

There are a lot of balls in the air on this one. No one knows where things are going.

At the moment I’m thinking about the proverbial “man on the street,” the “ordinary person,” whoever, whatever they are. Someone with no expertise in any of the relevant disciplines, someone who’s seen some science fiction movies or TV where a computer goes off the reservation, someone who’s just living their life, dealing with whatever, job, kids, woke, abortion, whatever. Now ChatGPT comes out and they play with it a bit, or their kids do, a friend, an aunt, someone. They play with this thing and it’s a lot more convincing than Siri or Alexa. What do they make of it?

Where do they turn for insight? It’s on offer from many sources, online and elsewhere. What kinds of conversations do they have with one another? Which experts do they listen to? How do they identify expertise? Obviously, this is going to vary widely among individuals.

It varies even among the experts. Just who is an expert, anyhow? Even the people who build these things don’t know what they can do or how to control them. So just what is the value of that expertise?

What have the readers of Time Magazine made of Yudkowsky’s arguments? The fact that Time gave him space certifies him as some kind of expert, no? Here’s how they identified him:

Yudkowsky is a decision theorist from the U.S. and leads research at the Machine Intelligence Research Institute. He's been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field.

I don’t read Time so I have no idea what their overall coverage of AI has been like, but I assume Yudkowsky’s is not the only voice the magazine has presented. If they’d given space to someone like Yann LeCun, who holds quite a different view from Yudkowsky’s, how would people deal with that? They could identify him as a Vice President for AI at Meta, a faculty member at NYU, and a recipient of the Turing Award.

How would people weigh Yudkowsky’s credentials against those of someone like LeCun? How would they take such credentials into account when evaluating their respective arguments? Keep in mind that expertise is not so highly valued as it once was.

What a mess. But how else do you overhaul culture from top to bottom?
