Wednesday, June 17, 2020

GPT-3 apocalypse now [?]



This is followed by a long tweet stream in which Anders outlines the possible consequences.
(1) Politicians and celebrities already have speechwriters and ghostwriters. For them, the key point is whether they endorse some text, not whether they generated it themselves.

So, auto-text does not substantially impact them. It’s just another way to outsource the writing.

(2) A divide between verified writers and “other writers” develops. Verified writers get a black check mark (black for ink). Everyone else is assumed to be, possibly, just auto-generated text.

(Of course, this depends on how impressive the auto-text becomes. See the next point…)

(3) People develop a sense for automatically generated writing, like they have for clickbait. You’ll be reading and then realize that it's actually auto-text.

This will lead to an arms race. My prediction: people who try to learn to tell auto-text from human writing will be able to.

(4) While auto-generated text replaces writing in some cases, in more cases it just makes writing easier. People automatically generate text to get an initial draft and then manually edit it into its final form. Using auto-text as an aid to writing becomes common.

(5) In some cases, auto-text may even help highly skilled writers. Maybe when stuck, writers will use auto-text to produce suggestions they *hate so much* that they have to fix them.

This is similar to chess grandmasters, who play at their peak when using a chess program as an aid.

(6) Whether as a supplement to or a replacement for writing, many skilled-but-not-that-skilled writers lose their jobs. In many cases, they are simply replaced by auto-text.
Often, one writer + auto-text will replace three writers without auto-text.

(7) The loss of jobs for mid-skilled writers continues to increase the pressure on academia and the humanities, which are already failing to provide their graduates with jobs.

The trend of automation continues. (Automation: real, slow, mostly hitting skilled workers.)

(8) One industry that is hit hard is journalism. 80% of articles could be replaced by auto-text plus manual editing. Maybe even 95%.

Journalism is already having trouble paying for itself; cf the slow slide into clickbait. This will come as a welcome way to lower costs.

(9) As AI shreds journalism, journalists try to turn against it. A lot of digital ink is spilled explaining why real journalism by real writers matters. Some is written by auto-text.

This is like the slide into clickbait: journalists loudly lament it, and it happens anyway.

(10) Discourse among the cognoscenti improves. Why? There is now a new bar that must be passed to say or think interesting things — you have to say things that are recognizably not auto-generated.

"So-and-so sounds like auto-text" becomes an insult.

(11) Tons of people decide, even more definitively than before, not to engage in serious thinking on their own.

This is much like chess, where tons of people were discouraged from serious study of the game once the programs got good enough.

(12) Scammers use auto-text to automate scam conversations. This enables some scammers to scale up their efforts considerably.

This yields some initial successes, but then is met by people working on auto-generated conversation detection, identity verification, and the like.

(13) A form of trolling develops in which people use auto-text to refute others, confound them, or make them look stupid, after which it is revealed that the troll has just been using auto-text.

Some counter-scammers use this on scammers. We all laugh.

(14) The rate of gaffes in public apologies goes down, because it becomes common practice to auto-generate an apology before trying to write one yourself.

This, in turn, leads people who want sincere apologies to try to figure out which ones are auto-generated.

(15) Teenagers who still engage in dating sometimes send breakup letters clearly written by auto-text. A new variety of cruelty in breakups.

(This is a variation on something I saw in high school: kids sending breakup letters back after editing them for spelling and grammar.)

(16) Before automatically generated text becomes too good, sketch comedy groups make fun of it by having regular conversations with it.

Eventually the text is too good for this to be funny anymore.

(17) Netflix tries developing movie scripts using auto-text.

(I think Netflix is already doing formula-generated movies and shows with less overarching coherence; cf the movie Bright (2017). My impression is that these are hated by the critics but surprisingly popular.)

(18) It becomes much, much easier to cheat on homework assignments. This leads to the development of methods for tracking the origin and development of pieces of writing. These are used by the school system to prevent cheating.

(19) More generally, it becomes important in many cases to verify identities and the origins of text. This leads to the development of verification technology, which in turn makes it harder to be anonymous or pseudonymous in general.

So there are some privacy implications.

(20) With regard to AI safety, auto-text will thoroughly convince people that an advanced AI could hire assassins, talk people into letting it out of the box, and so on. This will increase people's credence in AI danger scenarios.

(cont.) But on the other hand, auto-text will confuse people about the threat from AI overall. Many people will assume that because an AI can generate text at a 12th-grade reading level, it is already intelligent. So why hasn't it undergone takeoff, then?

What did I miss?
Hmmm.... In suspension? Tongue in cheek? Only the Shadow knows.
