Friday, September 25, 2020

Gwern on the implications of GPT-3 ["no coherent model of why GPT-3 was possible"]

I'm not a regular follower of Gwern, though I did check out what he has to say about GPT-3 and poetry, which is why I only just now noticed this statement:
...GPT-3’s scaling curves, unpredicted meta-learning, and success on various anti-AI challenges suggests that in terms of futurology, AI researchers’ forecasts are an emperor sans garments: they have no coherent model of how AI progress happens or why GPT-3 was possible or what specific achievements should cause alarm, where intelligence comes from, and do not learn from any falsified predictions. Their primary concerns appear to be supporting the status quo, placating public concern, and remaining respectable. As such, their comments on AI risk are meaningless: they would make the same public statements if the scaling hypothesis were true or not.
While Gwern appears to believe in AI in a way that I do not, I agree with this assessment. That sense of things is what prompted my recent thinking about GPT-3 in the first place, in particular my working papers: GPT-3: Waterloo or Rubicon? Here be Dragons, from August 5, and the more recent What economic growth and statistical semantics tell us about the structure of the world, from August 24.

Gwern concludes that assessment with this question: "Depending on what investments are made into scaling DL, and how fast compute grows, the 2020s should be quite interesting—sigmoid or singularity?" I do expect the 2020s to be interesting, but I don't expect a sigmoid from GPT-X and similar engines, nor a singularity from anything. Though, as I've been arguing for a while, we're already swimming in a singularity.
