Saturday, April 16, 2022

Have we reached a tipping point? Which one? [AI, GPT-3]

Steven Johnson has an interesting article in The New York Times Magazine: A.I. Is Mastering Language. Should We Trust What It Says? (April 15, 2022). He starts by showing us a guessing game, guess the missing ____. He talks about the technology – GPT-3 and AlphaFold, and so forth – discusses its implications, and arrives at the meeting that gave birth to OpenAI:

OpenAI’s origins date to July 2015, when a small group of tech-world luminaries gathered for a private dinner at the Rosewood Hotel on Sand Hill Road, the symbolic heart of Silicon Valley. The dinner took place amid two recent developments in the technology world, one positive and one more troubling. On the one hand, radical advances in computational power — and some new breakthroughs in the design of neural nets — had created a palpable sense of excitement in the field of machine learning; there was a sense that the long “A.I. winter,” the decades in which the field failed to live up to its early hype, was finally beginning to thaw. A group at the University of Toronto had trained a program called AlexNet to identify classes of objects in photographs (dogs, castles, tractors, tables) with a level of accuracy far higher than any neural net had previously achieved. Google quickly swooped in to hire the AlexNet creators, while simultaneously acquiring DeepMind and starting an initiative of its own called Google Brain. The mainstream adoption of intelligent assistants like Siri and Alexa demonstrated that even scripted agents could be breakout consumer hits.

But during that same stretch of time, a seismic shift in public attitudes toward Big Tech was underway, with once-popular companies like Google or Facebook being criticized for their near-monopoly powers, their amplifying of conspiracy theories and their inexorable siphoning of our attention toward algorithmic feeds. Long-term fears about the dangers of artificial intelligence were appearing in op-ed pages and on the TED stage. Nick Bostrom of Oxford University published his book “Superintelligence,” introducing a range of scenarios whereby advanced A.I. might deviate from humanity’s interests with potentially disastrous consequences. In late 2014, Stephen Hawking announced to the BBC that “the development of full artificial intelligence could spell the end of the human race.” It seemed as if the cycle of corporate consolidation that characterized the social media age was already happening with A.I., only this time around, the algorithms might not just sow polarization or sell our attention to the highest bidder — they might end up destroying humanity itself. And once again, all the evidence suggested that this power was going to be controlled by a few Silicon Valley megacorporations.

The agenda for the dinner on Sand Hill Road that July night was nothing if not ambitious: figuring out the best way to steer A.I. research toward the most positive outcome possible, avoiding both the short-term negative consequences that bedeviled the Web 2.0 era and the long-term existential threats. From that dinner, a new idea began to take shape — one that would soon become a full-time obsession for Sam Altman of Y Combinator and Greg Brockman, who recently had left Stripe. Interestingly, the idea was not so much technological as it was organizational: If A.I. was going to be unleashed on the world in a safe and beneficial way, it was going to require innovation on the level of governance and incentives and stakeholder involvement. The technical path to what the field calls artificial general intelligence, or A.G.I., was not yet clear to the group. But the troubling forecasts from Bostrom and Hawking convinced them that the achievement of humanlike intelligence by A.I.s would consolidate an astonishing amount of power, and moral burden, in whoever eventually managed to invent and control them.

In December 2015, the group announced the formation of a new entity called OpenAI.

He goes on to tell us how OpenAI was conceived as a nonprofit, but how the prodigious costs of the ‘compute’ required to build state-of-the-art AI forced it to create a for-profit partner, OpenAI L.P. That’s very interesting. Johnson returns to the technology itself, talks of the “stochastic parrot” critique by Emily Bender and Timnit Gebru, checks in with uber-skeptic Gary Marcus, and so forth and so on. After a while he gets around to this:

So how do you widen the pool of stakeholders with a technology this significant? Perhaps the cost of computation will continue to fall, and building a system competitive to GPT-3 will become within the realm of possibility for true open-source movements, like the ones that built many of the internet’s basic protocols. (A decentralized group of programmers known as EleutherAI recently released an open source L.L.M. called GPT-NeoX, though it is not nearly as powerful as GPT-3.) Gary Marcus has argued for “a coordinated, multidisciplinary, multinational effort” modeled after the European high-energy physics lab CERN, which has successfully developed billion-dollar science projects like the Large Hadron Collider. “Without such coordinated global action,” Marcus wrote to me in an email, “I think that A.I. may be destined to remain narrow, disjoint and superficial; with it, A.I. might finally fulfill its promise.”

Yes, and there’s more of that. But really, I want to get around to something else. If you’ve been thinking about these issues, by all means, read the article.

I do believe that GPT-3 and its kin represent a potential phase shift in technology, something I talked about at some length in a working paper I posted two years ago, GPT-3: Waterloo or Rubicon? Here be Dragons, Version 2. But what does that mean, phase shift? That’s when some folks start talking about the so-called Tech Singularity, which I’ve also written about, Redefining the Coming Singularity – It’s not what you think, Version 2. I’m now going to say a bit more.

It’s common to think of such things as exhibiting an “S curve,” like this:
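(The figure shows a generic S-shaped, or logistic, curve. For concreteness, a minimal way to write such a curve down, assuming the standard logistic form rather than anything specific from Johnson’s article or my earlier papers, is

$$f(t) = \frac{L}{1 + e^{-k(t - t_0)}},$$

where $L$ is the ceiling the curve eventually levels off at, $k$ sets how steep the middle rise is, and $t_0$ marks the inflection point where growth is fastest.)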

That’s my view, but it’s not how the Singularitarians think. They think the rise is almost vertical and that it’s driven by the technology itself. Somewhere down there to the left some computer or computers become self-aware and proceed to make themselves smarter and smarter and smarter and –

FOOM!

That’s nonsense.

WE are driving the rise. As we learn more about the technology, and more about human and, indeed, animal intelligence, we are able to develop and deploy more sophisticated intelligence. The singularity is in our minds, our collective culture.

Now, let’s go back to that meeting that was held in July 2015. Where do we place it on that curve? Do we place it here?

Or maybe a bit further up, perhaps here:

Given that AI got its start back in the 1950s, one might argue that we are much nearer the shoulder of the curve.

I’m skeptical. I think that prior work has been ‘absorbed’ into the long horizontal stretch to the left. [These are only suggestive diagrams, visual metaphors.] But we don’t really know.

It’s not at all obvious that we’ll be able to climb that curve. I believe that the technology Johnson discusses represents an advance over the technology we had, say, five years ago; but I do not think it will take us up the curve. I think we’ve got a lot more to learn, a lot. Do we have the wisdom and will to learn it?

* * * * *

Emily Bender replies to Steven Johnson, On NYT Magazine: Resist the Urge to be Impressed.
