I’ve been listening to a podcast by Ezra Klein with guests Kevin Roose and Casey Newton. Here’s one bit that I think is important.
Ezra Klein
Yeah, I first agree that clearly A.I. safety was not behind whatever disagreements Altman and the board had. I heard that from both sides of this. And I didn’t believe it, and I didn’t believe it, and I finally was convinced of it. I was like, you guys had to have had some disagreement here? It seems so fundamental.
But this is what I mean when I say the governance is going worse. All the OpenAI people thought it was very important, and Sam Altman himself talked about its importance all the time, that they had this nonprofit board connected to this nonfinancial mission. The values of building A.I. that served humanity, that could fire Sam Altman at any time or even shut down the company fundamentally if they thought it was going awry in some way or another. And the moment that board tried to do that — now, I think they did not try to do that on very strong grounds — but the moment they tried to do that, it turned out they couldn’t. That the company could fundamentally reconstitute itself at Microsoft or that the board itself couldn’t withstand the pressure coming back. [...]
So maybe they have a stronger board that is better able to stand up to Altman. That is one argument I have heard.
On the other hand, those stronger board members do not hold the views on A.I. safety that the board members who left, like Helen Toner of Georgetown and Tasha McCauley from RAND, held. I mean, these are people who are going to be very interested in whether or not OpenAI is making money. I’m not saying they don’t care about other things too, but these are people who know how to run companies. [...] I mean, am I getting that story wrong to you?
Kevin Roose
No, I think that’s right. And it speaks to one of the most interesting and strangest things about this whole industry, which is that the people who started these companies were weird. And I say that with no normative judgment. But they made very weird decisions.
They thought A.I. was exciting and amazing. They wanted to build A.G.I. But they were also terrified of it, to the point that they developed these elaborate safeguards. I mean, in OpenAI’s case, they put this nonprofit board in charge of the for-profit subsidiary and gave, essentially, the nonprofit board the power to push a button and shut down the whole thing if they wanted to.
At Anthropic, one of these other A.I. companies, they are structured as a public benefit corporation. And they have their own version of a nonprofit board that is capable of essentially pushing the big red shut it all down button if things get too crazy. This is not how Silicon Valley typically structures itself.
Mark Zuckerberg was not in his Harvard dorm room building Facebook thinking if this thing becomes the most powerful communication platform in the history of technology, I will need to put in place these checks and balances to keep myself from becoming too powerful. But that was the kind of thing that the people who started OpenAI and Anthropic were thinking about.
And so I think what we’re seeing is that that kind of structure is bowing to the requirements of shareholder capitalism, which says that if you do need all this money to run these companies, to train these models, you are going to have to make some concessions to the powers of the shareholder and of the money. And so I think that one of the big pieces of fallout from this OpenAI drama is just that OpenAI is going to be structured and run much more like a traditional tech company than as this kind of holdover from the nonprofit board era.
Casey Newton
And that is just a sad story. I truly wish that it had not worked out that way. I think one of the reasons why these companies were built in this way was that it helped them attract better talent. I think that so many people working in A.I. are idealistic and civic-minded and do not want to create harmful things. And they’re also really optimistic about the power that good technology has. And so when those people say that, as powerful and good as these things could be, they could also be really dangerous, I take them really seriously. And I want them to be empowered. I want them to be on company boards. And those folks have just lost so much ground over the past couple of weeks. And it is a truly tragic development, I think, in the development of this industry.