Brooks just made a post specifically directed at the hype surrounding transformers. First, a bit of historical perspective:
A few such instances of AI technologies that have induced gross overestimates of how soon we would get to AGI, in roughly chronological order, that I personally remember include:
John McCarthy’s estimate that the computers of the 1960’s were powerful enough to support AGI, Minsky and Michie and Nilsson each believing that search algorithms were the key to intelligence, neural networks (volume 3, perceptrons) [[I wasn’t around for the first two volumes; McCulloch and Pitts in 1943, Minsky in 1953]], first order logic, resolution theorem proving, MacHack (chess 1), fuzzy logic, STRIPS, knowledge-based systems (and revolutionizing medicine), neural networks (volume 4, back propagation), the primal sketch, self driving cars (Dickmanns, 1987), reinforcement learning (rounds 2 and 3), SOAR, qualitative reasoning, support vector machines, self driving cars (Kanade et al, 1997), Deep Blue (chess 2), self driving cars (Thrun, 2007), Bayesian inference, Watson (Jeopardy, and revolutionizing medicine), neural networks (volume 5, deep learning), Alpha GO, reinforcement learning (round 4), generative images, and now large language models. All have heralded the imminence of human level intelligence in machines. All were hyped up to the limit, but mostly in the days when very few people were even aware of AI, so very few people remember the levels of hype. I’m old. I do remember all these, but have probably forgotten quite a few…
None of these things have lived up to that early hype. As Amara predicted, at first they were overrated. But at the same time, almost every one of these things has had a long-lasting impact on our world, just not in the particular form that people first imagined. As we twirled them around and prodded them, and experimented with them, and failed, and retried, we remade them in ways different from how they were first imagined, and they ended up having bigger, longer-term impacts, but in ways not first considered.
How does this apply to the GPT world?
Then a caveat:
Back in 2010, Tim O’Reilly tweeted out “If you’re not paying for the product then you’re the product being sold,” in reference to things like search engines and apps on telephones.
I think that GPTs will give rise to a new aphorism (where the last word might vary over an array of synonymous variations):
If you are interacting with the output of a GPT system and didn’t explicitly decide to use a GPT then you’re the product being hoodwinked.
I am not saying everything about GPTs is bad. I am saying that, especially given the explicit warnings from OpenAI, you need to be aware that you are using an unreliable system.
He goes on to say:
When no person is in the loop to filter, tweak, or manage the flow of information, GPTs will be completely bad. That will be good for people who want to manipulate others without having revealed that the vast amount of persuasive evidence they are seeing has all been made up by a GPT. It will be bad for the people being manipulated.
And it will be bad if you try to connect a robot to a GPT. GPTs have no understanding of the words they use, no way to connect those words, those symbols, to the real world. A robot needs to be connected to the real world, and its commands need to be coherent with the real world. Classically this is known as the “symbol grounding problem”. GPT+robot is only ungrounded symbols. [...]
My argument here is that GPTs might be useful, and well enough boxed, when there is an active person in the loop, but dangerous when the person in the loop doesn’t know they are supposed to be in the loop. [This will be the case for all young children.] Their intelligence, applied with strong intellect, is a key component of making any GPT successful.
Finally, his specific predictions:
Here I make some predictions for things that will happen with GPT types of systems, and sometimes coupled with stable diffusion image generation. These predictions cover the time between now and 2030. Some of them are about direct uses of GPTs and some are about the second and third order effects they will drive.
- After years of Wikipedia being derided as not a referable authority, and not being allowed to be used as a source in serious work, it will become the standard rock-solid authority on just about everything. This is because it has built a human-powered approach to verifying factual knowledge in a world of high-frequency human-generated noise.
- Any GPT-based application that can be relied upon will have to be super-boxed in, and so the power of its “creativity” will be severely limited.
- GPT-based applications that are used for creativity will continue to have horrible edge cases that sometimes rear their ugly heads when least expected, and furthermore, the things that they create will often arguably be stealing the artistic output of unacknowledged humans.
- There will be no viable robotics applications that harness the serious power of GPTs in any meaningful way.
- It is going to be easier to build from scratch software stacks that look a lot like existing software stacks.
- There will be much confusion about whether code infringes on copyright, and so there will be a growth in companies that are used to certify that no unlicensed code appears in software builds.
- There will be surprising things built with GPTs, both good and bad, that no one has yet talked about, or even conceived.
- There will be incredible amounts of misinformation deliberately created in campaigns for all sorts of arenas from political to criminal, and reliance on expertise will become more discredited, since the noise will drown out any signal at all.
- There will be new categories of pornography.