Friday, May 7, 2021

Analyze This! Screaming on the flat part of the roller coaster ride [Does GPT-3 get the joke?]

Here’s Seinfeld’s first television appearance. It is from 1977 on Celebrity Cabaret, a nationally syndicated show. He’s doing a bit that starts with the Roosevelt Island tramway. You know what that is?

Background knowledge and common sense

Or don’t you? Just to be sure, Seinfeld helpfully explains what a tramway is. What it is, really, is something he uses to set up the joke, but that’s not what I’m interested in. I’m interested in background knowledge, often known as common sense knowledge in the rarefied world of artificial intelligence (AI).

If you look at the version of this bit that Seinfeld published in his book, Is This Anything?, you’ll see that he doesn’t explain it at all (I’ve placed it immediately below). He just assumes – Bam! – you know what it is. He also assumes you know that the South Bronx is a rather sketchy neighborhood – back in the day it was said that “the Bronx is burning.” Because sometimes it was. But if you didn’t already know that you wouldn’t get the joke.

But Seinfeld doesn’t say “the South Bronx” in the version in the clip. He simply refers to “the ghetto.” That tells you what you need to know to get the joke. Though I’ve not consulted him on this, I assume he did that because he figured that most people in a national audience would not know about the sad state of the South Bronx. New Yorkers would know that; it’s background knowledge for them – unless of course they’re actually in the South Bronx, in which case it’s in their face. But others are not likely to know that.

So that’s what interests me, the background knowledge, the common sense knowledge, that holds the bit together. You also have to know that roller coasters go up and down (did you notice the gesture he made during the bit?), that they’re a little scary on the downslope, that bankruptcy isn’t consistent with amusement park rides, that cities have governments and that it’s those governments that do things, etc. We know all this stuff without thinking about it.

But computers do not. So we’re going to quiz a computer about the punch line.

The Bit: Roosevelt Island Tramway

I see they just finished the Roosevelt Island Tramway.

That’s nice…

The city’s going bankrupt,

they’re putting up rides for us.

Next thing you know, there’ll be a roller coaster through the South Bronx.

That would be the first roller coaster where the people scream on the flat part of the ride.

Analysis and Commentary from GPT-3

GPT-3 is a massive AI engine that OpenAI revealed in June 2020. Massive? It’s got 175 billion parameters. Parameters? Don’t worry about it. Think of a parameter as a control knob on your TV – do TVs have knobs anymore? Imagine a TV with 175 billion, with a “B”, knobs. That’s a lot of twiddling. It was trained on 499 billion tokens (think, roughly, words) of text sucked in from the internet. So it has processed text about Roosevelt Island, tramways, roller coasters, the South Bronx and a lot of other things, such as Antarctica, green cheese, Mata Hari, redwoods, giant Komodo dragons, fried rice, the Saturn V rocket, Jerry Seinfeld, pencils, The Tale of Genji, whales, catapults, looms, fruit flies, and anything else lying around on the Internet.

What GPT-3 does is to produce text. You feed it a bit of text and it runs with it, producing a continuation of the text. The text it produces is often astonishingly natural. It seems like a human wrote it. But no human did, just a dumb, but huge, just ginormous, machine.
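If you want the flavor of that continuation loop in miniature, here is a toy sketch. GPT-3 is a transformer, not a bigram counter, and the “training text” below is invented for the example, but the basic move is the same: predict a plausible next word from statistics gathered over training text, append it, and repeat.

```python
import random
from collections import defaultdict

# Toy illustration, NOT how GPT-3 works internally: a bigram model that,
# at a vastly smaller scale, continues a prompt by repeatedly predicting
# a next word from statistics over training text. The corpus is invented.
corpus = (
    "the city is going bankrupt the city is putting up rides "
    "the roller coaster goes up and down the people scream on the ride"
).split()

# Record which words follow each word in the training text.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def continue_text(prompt_word, length=5, seed=0):
    """Extend a one-word prompt by sampling observed next words."""
    rng = random.Random(seed)
    words = [prompt_word]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break  # dead end: no word ever followed this one
        words.append(rng.choice(options))
    return " ".join(words)

print(continue_text("the"))
```

Everything the toy model “knows” comes from word-to-word statistics in its corpus; GPT-3’s statistics are just unimaginably richer.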

If you feed GPT-3 some text and then ask a question, it will answer the question. That’s what I did with GPT-3. Well, not me, my friend Phil Mohun is the one who quizzed the machine. He gave it Seinfeld’s bit, and then asked a question. GPT-3 answered. Since I was particularly interested in the last line, the punch line, that’s what Phil asked about.

1] Here’s the first interaction. I’ve set Phil’s words in boldface.

Jerry Seinfeld tells the following joke:

I see they just finished the Roosevelt Island Tramway.
That’s nice…
The city’s going bankrupt,
they’re putting up rides for us.
Next thing you know, there’ll be a roller coaster through the South Bronx.
That would be the first roller coaster where the people scream on the flat part of the ride.

Q: Why would the people scream on the flat part of the ride?
GPT-3: Because it’s the Bronx.

Well, yeah. So what? I wanted to see if GPT-3 understood the relationship between being scared on a roller coaster and being scared on the ground in the South Bronx. So Phil tried again.

2] He increases the “temperature.” Don’t worry about just what that is, but increasing the temperature raises the amount of randomness in GPT-3’s response. Note that on this try Phil also gave GPT-3 the whole bit, as in example 1. That’s true of the other examples as well, but I’m omitting that part because it’s redundant. Just assume GPT-3 got the whole deal.
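For the curious, here is roughly what temperature does, sketched in Python with made-up numbers. Candidate next words get scores (logits); the scores are divided by the temperature and a softmax turns them into probabilities to sample from. A high temperature flattens the distribution, so less likely words get picked more often. The logits below are hypothetical, not GPT-3’s.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Softmax over logits/temperature, then sample one index.
    Higher temperature flattens the distribution (more randomness);
    lower temperature sharpens it toward the top choice."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i, probs
    return len(probs) - 1, probs

# Hypothetical scores for four candidate next words.
logits = [4.0, 2.0, 1.0, 0.5]
rng = random.Random(0)
_, cold = sample_with_temperature(logits, 0.5, rng)
_, hot = sample_with_temperature(logits, 2.0, rng)
print(cold[0], hot[0])  # the top word dominates when cold, less so when hot
```

At temperature 0.5 the top candidate soaks up nearly all the probability; at 2.0 the runners-up get a real chance, which is why Phil’s hotter runs wander further afield.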

Q: Why would the people scream on the flat part of the ride?
GPT-3: There’ll be nothing to see for miles but rubble.

We’re getting somewhere. It seems like GPT-3 “knows” something about the South Bronx. But what does rubble have to do with screaming?

3] Here’s another response:

Q: Why would the people scream on the flat part of the ride?
GPT-3: So the people in Queens can see them.

That’s very interesting, and indicates some knowledge of geography as Queens and the South Bronx adjoin one another. But the roller coaster would have to be pretty close to Queens in order for the screams to reach people there. How far does GPT-3 think screams can be heard, under what wind conditions, and so forth? It’s not clear what’s going on inside GPT-3 – why would people on the roller coaster want to be noticed by people in Queens? – but the reply is not flat-out nonsense.

4] Let’s give GPT-3 another try.

Q: Why would the people scream on the flat part of the ride?
GPT-3: The ride is the South Bronx.
Q: What about the South Bronx would cause people to scream?
GPT-3: Look at a map. It’s all twists and turns—like a roller coaster!

Now we’ve got a two-turn interaction with GPT-3. Phil follows GPT-3’s first answer with another question.

Did GPT-3 actually look at a map? I think not. The only thing it looked at during training was words. Still, I wouldn’t be surprised if there were some text somewhere in the database that gave it some idea about the geography. Not, mind you, that I’m at all clear about the geography myself; I’ve only been there once and that was in and out, near the Grand Concourse. Still, I score it a good response.

5] Let’s go for another response to the same prompt:

Q: Why would the people scream on the flat part of the ride?
GPT-3: The ride is the South Bronx.
Q: What about the South Bronx would cause people to scream?
GPT-3: You’d have to be there.

A variant of the joke that interacts with the South Bronx's reputation for high crime rates and arson.

In 1982, the South Bronx was experiencing a crisis of arson, fires, and abandonment. By the end of that year, the city would have paid out $10 million to local contractors for repairs and maintenance.

Whoa! I love this. GPT-3 really dug down and found out something about the South Bronx, something directly relevant to the bit.

You might be wondering, however, why GPT-3 gave such a different response in #5 than in #4 when the prompts are the same. GPT-3 is not deterministic. That is to say, when given the same input more than once it doesn’t necessarily give the same output. Not that it’s got a mind – at least I’m not going to credit it with one, and I don’t see why anyone would. But it’s complicated. Frankly, we don’t really know what it’s doing.
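The non-determinism itself is easy to picture: whatever else is going on inside, the final step is a weighted random draw over candidate next words. The vocabulary and probabilities below are invented for illustration.

```python
import random

# Invented probabilities for a handful of candidate next words; the point
# is only that the pick is a weighted random draw, so identical prompts
# can yield different continuations on different runs.
next_word_probs = {"Bronx": 0.4, "rubble": 0.3, "Queens": 0.2, "fun": 0.1}

def pick_next_word(rng):
    """Draw one word, weighted by its probability."""
    words = list(next_word_probs)
    weights = list(next_word_probs.values())
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(1)
samples = [pick_next_word(rng) for _ in range(20)]
print(samples)  # twenty draws from the same fixed distribution
```

Run the draw twenty times and you get a mix of words, not twenty copies of the favorite; that, in miniature, is why #4 and #5 diverge.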

Let’s take a couple more spins on the merry-go-round.

6] Notice that Phil’s second question is explicitly about screaming:

Q: Why would the people scream on the flat part of the ride?
GPT-3: The ride is the South Bronx.
Q: Why do people scream on roller coaster rides?
GPT-3: Because that’s the only time of the ride they realize there are other options.

Whoa! That’s hilarious. I wonder if Seinfeld could somehow work it into a bit. But where did it come from?

7]

Q: Why would the people scream on the flat part of the ride?
GPT-3: The ride is the South Bronx.
Q: Why do people scream on roller coaster rides?
GPT-3: Because they’re having fun!

8]

Q: Why would the people scream on the flat part of the ride?
GPT-3: The ride is the South Bronx.
Q: Why do people scream on roller coaster rides?
GPT-3: Because it’s scary.

These last two are very interesting. Both make sense and are, I believe, true. And yet they are somewhat at odds with one another, no? Roller coaster rides are both fun and scary. In fact they are fun because they are scary, scary but not dangerous. Does fun always have to be based on something scary? I think not. Are scary things always fun? Certainly not. I wonder how much of that GPT-3 understands. For that matter, how much of it do we humans understand? Not as much as we’d like.

And with that I decided we’d questioned GPT-3 enough. I’m pretty sure such a Q and A could have gone on for a while. I don’t know where it would have gone but I’m pretty sure GPT-3 would not have figured out the tricky relationship between scariness and fun.

The larger point, though, is simply that, despite all the text it went through during training, GPT-3 can’t quite put it together to tell us why people would be screaming on the flat part of a roller coaster ride in the South Bronx. It was getting pretty close there at the end, but that took a bit of prompting.

* * * * *

For extended remarks on GPT-3, see my working paper, GPT-3: Waterloo or Rubicon? Here be Dragons, Version 2, Working Paper, August 20, 2020, 34 pp., https://www.academia.edu/43787279/GPT_3_Waterloo_or_Rubicon_Here_be_Dragons_Version_2.

* * * * *

From Jerry Seinfeld, Is This Anything? Simon & Schuster, 2020.
