Friday, February 16, 2024

Sam Altman’s Big, Bad Idea

Nonzero Newsletter

Robert Wright
Feb 16, 2024

This week's Nonzero covers several topics, but NVIDIA's Jensen Huang and OpenAI's Sam Altman share the lead article. Here's the full text of that article.

* * * * *

This week Jensen Huang, co-founder and CEO of microchip maker NVIDIA, said that all nations should build their own high-powered large language models. That way, he said, they will have “sovereign AI.”

That way they will also make him even richer than he is. Training a big LLM takes tens of thousands of microchips that cost tens of thousands of dollars each. And NVIDIA, which now has the third-highest market valuation of any US company (behind Microsoft and Apple), dominates the AI chipmaking business.

At least for now. According to the Wall Street Journal, OpenAI CEO Sam Altman is seeking investors in “a wildly ambitious tech initiative that would boost the world’s chip-building capacity” and also boost its production of energy. (Huge amounts of power go into the training and mass use of LLMs.) One source told the Journal that Altman is trying to round up between five and seven trillion dollars—more than five percent of the world’s GDP.

So the king of AI hardware and the king of AI software agree: Planet Earth needs to devote more resources to AI than it’s already devoting.

But does it? Is accelerating the evolution of AI in the interest of the eight billion or so people who aren’t Jensen Huang and aren’t Sam Altman?

AI can bring lots of wonderful things—cheaper, better medical care, leaps in economic productivity, new forms of creative expression—even, for some people, new and welcome forms of companionship. But those things tend to have a flip side; they’re ‘disruptive’ in both the good and bad senses of the term.

“Leaps in economic productivity,” for example, is often another name for “people losing their jobs.” Hence this headline in Monday’s Wall Street Journal: “AI Is Starting to Threaten White-Collar Jobs. Few Industries Are Immune.” Subhead: “Leaders say the fast-evolving technology means many jobs might never return.” And even if as many jobs are created as disappear, the transition will be wrenching for many workers and life-shattering for some.

So too with AIs-as-companions (which is already a thing): Yes, as with social media, we’ll eventually learn what the downside is, and presumably we’ll figure out how to handle that (even if that mission still isn’t accomplished in the case of social media). But meanwhile there will be some psychological carnage. AI companies, like social media companies, will naturally “optimize for engagement”—and we’ve seen how suboptimal that is.

And, of course, there’s the problem of AIs that, in the hands of bad actors, wreak havoc. This week researchers at the University of Illinois published a paper reporting that “LLM agents can autonomously hack websites, performing tasks as complex as blind database schema extraction and SQL injections without human feedback.” I don’t know what that means, but, given that the italics were in the original text, I’m pretty sure it’s not good.

With bad-actor AI, as with more legitimate AI that has bad side effects, we can eventually get things under control. In principle. But, as a practical matter, if lots of different AI disruptions hit us fast, a non-catastrophic transition could be hard.

Oddly, Sam Altman seems to agree with much of this analysis. In October, during an on-stage interview, he noted that, even if the age of AI brings more and better jobs to replace the old jobs, many of the people who lose the old ones will suffer. He even said, “The thing I think we do need to confront as a society is the speed at which this is going to happen.”

But note that word “confront.” Altman isn’t proposing that, faced with dangerously disruptive speed, we try to slow things down. He’s not even proposing that we not aggressively speed things up. In fact, he seems to think that, in some ways, speeding things up will solve the problem of things moving too fast. In that on-stage interview, he elaborated on how we can “confront” the speed of social transformation:

“One of the reasons that we feel so strongly about deploying this tech as we do [is that]… by putting this out in people’s hands and making this super widely available and getting billions of people to use ChatGPT, not only do people have the opportunity to think about what’s coming and participate in that conversation, but people use the tool to push the future forward.”

In short: It’s all good! But isn’t that what Silicon Valley told us last time around? Right before social media helped polarize our politics and spawn pathological subcultures and make adolescence even more stressfully weird than it used to be?

The good news is that some people are thinking seriously about the challenge of governing AI. This week saw the release of a paper called “Computing Power and the Governance of Artificial Intelligence” (whose authors include AI eminence Yoshua Bengio and also—credit where due—someone who works at OpenAI). The paper’s main point is that computing power, aka “compute”—which means, roughly speaking, the high-end chips NVIDIA makes and Altman wants to start making—is a key, even the key, policy lever when it comes to governing AI.

The paper is policy-agnostic; it’s not recommending anything in particular. It just explains how such things as the compute-intensive and energy-intensive process of training big LLMs, and the trackable supply chains involved in producing high-end chips, make the future development and deployment of AI amenable to various kinds of transparency and governance. The specifics are largely left to the reader’s imagination.

So let’s dream! Suppose the world’s governments got together and decided to slightly slow the evolution of AI. They might, for example, put a steep tax on advanced microchips—and put the revenue to related uses, like studying the AI “alignment” problem or steering some of AI’s brainpower toward solving problems faced by poorer nations, problems market forces alone wouldn’t address.

A global tax on advanced microchips would probably annoy both Jensen Huang and Sam Altman, but that’s not the biggest problem. The biggest problem is the very idea of getting the world’s governments together to talk seriously about an innovative policy. It’s hard enough to get governments together to talk about ending the wars they keep getting into!

This is humankind’s current problem, and possibly its fatal problem: Our political evolution hasn’t reached the level that the current level of our technological evolution demands. I’m pretty sure the solution to this problem isn’t the acceleration of technological evolution. —RW
