It’s hard to tell, but it sure is...shall we say...interesting.
Back in the summer of 2020, when GPT-3 was unveiled, I wrote a working paper, GPT-3: Waterloo or Rubicon? Here be Dragons. My objective was to convince myself that the underlying technology wasn’t just some weird statistical fluke, that there was in fact something going on of substantial interest and value. To my mind, I succeeded in that. But I was skeptical as well.
Here's what I put on the first page of that working paper, even before the abstract:
GPT-3 is a significant achievement.
But I fear the community that has created it may, like other communities have done before – machine translation in the mid-1960s, symbolic computing in the mid-1980s – triumphantly walk over the edge of a cliff and find itself standing proudly in mid-air.
This is not necessary and certainly not inevitable.
A great deal has been written about GPTs and transformers more generally, both in the technical literature and in commentary of various levels of sophistication. I have read only a small portion of this. But nothing I have read indicates any interest in the nature of language or mind. Interest seems relegated to the GPT engine itself. And yet the product of that engine, a language model, is opaque. I believe that, if we are to move to a level of accomplishment beyond what has been exhibited to date, we must understand what that engine is doing so that we may gain control over it. We must think about the nature of language and of the mind.
I didn’t expect that anyone with any influence in these matters would pay any attention to me – though one can always hope – but that’s no reason not to write.
That was 2020 and GPT-3. Two years later ChatGPT was launched to great acclaim, and justly so. I certainly spent a great deal of time playing with it, investigating it, and writing about it. But I didn’t forget my cautionary remarks from 2020.
Now we’re hearing rumblings that things aren’t working out so well. Back on August 12, the ever-skeptical Gary Marcus posted What if Generative AI turned out to be a Dud? Some possible economic and geopolitical implications. His first two paragraphs:
With the possible exception of the quick to rise and quick to fall alleged room-temperature superconductor LK-99, few things I have ever seen have been more hyped than generative AI. Valuations for many companies are in the billions, coverage in the news is literally constant; it’s all anyone can talk about from Silicon Valley to Washington DC to Geneva.
But, to begin with, the revenue isn’t there yet, and might never come. The valuations anticipate trillion dollar markets, but the actual current revenues from generative AI are rumored to be in the hundreds of millions. Those revenues genuinely could grow by 1000x, but that’s mighty speculative. We shouldn’t simply assume it.
And his last:
If hallucinations aren’t fixable, generative AI probably isn’t going to make a trillion dollars a year. And if it probably isn’t going to make a trillion dollars a year, it probably isn’t going to have the impact people seem to be expecting. And if it isn’t going to have that impact, maybe we should not be building our world around the premise that it is.
FWIW, I believe, and have been saying time and again, that hallucinations are inherent in the technology. They aren’t fixable.
Now, yesterday, Ted Gioia, a culture critic with an interest in technology and experience in business, posted Ugly Numbers from Microsoft and ChatGPT Reveal that AI Demand is Already Shrinking. Where Marcus has a professional interest in AI technology and intellectual skin in the tech game, Gioia is just a sophisticated and interested observer. Near the end of his post, after many links to unfavorable stories, Gioia observes:
... we can see that the real tech story of 2023 is NOT how AI made everything great. Instead this will be remembered as the year when huge corporations unleashed a half-baked and dangerous technology on a skeptical public—and consumers pushed back.
Here’s what we now know about AI:
- Consumer demand is low, and already appears to be shrinking.
- Skepticism and suspicion are pervasive among the public.
- Even the companies using AI typically try to hide that fact—because they’re aware of the backlash.
- The areas where AI has been implemented make clear how poorly it performs.
- AI potentially creates a situation where millions of people can be fired and replaced with bots—so a few people at the top continue to promote it despite all these warning signs.
- But even these true believers now face huge legal, regulatory, and attitudinal obstacles.
- In the meantime, cheaters and criminals are taking full advantage of AI as a tool of deception.
Marcus has just updated his earlier post with a follow-up: The Rise and Fall of ChatGPT?
Stay tuned.