Monday, April 10, 2023

Stories by ChatGPT: Fairy Tale, Realistic, and True

I've published a new working paper. Title above, abstract, contents, and introduction below.

Download at:

Academia.edu: https://www.academia.edu/99985817/Stories_by_ChatGPT_Fairy_Tale_Realistic_and_True
SSRN: https://ssrn.com/abstract=4414157
ResearchGate: https://www.researchgate.net/publication/369917775_Stories_by_ChatGPT_Fairy_Tale_Realistic_and_True

Abstract: This document collects a series of blog posts in which I prompted ChatGPT to tell a story. The exact nature of the prompts varied from simple – “Tell me a story” – to complex, where ChatGPT had to create a new story based on an existing story given in the prompt. Many of the stories were like fairy tales, some were realistic, and some were even about real people. When asked to tell a story about colorless green ideas, ChatGPT refused, and it refused to include an incident about a flying saucer in an ongoing fairy-tale kind of story. It told stories about many crimes, but not all crimes (no murders or rapes). I list 12 occasions when ChatGPT’s response surprised me. The posts were uploaded between January 16 and February 21, 2023.

Contents

Introduction: A dozen surprises 2
Toward a story grammar: ring-composition and myth 7
The structuralist aesthetics of ChatGPT 15
ChatGPT: Tales of William the Lazy 28
What happens when you ask ChatGPT to tell stories about giant chocolate milkshakes and colorless green ideas? 40
ChatGPT about Jack, a pumpkin, Aurora, Merlin, a cat, magic mushrooms, and space aliens 49
ChatGPT, stories, and in-context learning 55
ChatGPT: Tantalizing afterthoughts in search of story trajectories [induction heads] 66
Who does “colorless green ideas” the best, Noam Chomsky or ChatGPT? 80
ChatGPT: The Saga of Jack the Criminal 85
Exploring Lily’s world with ChatGPT [things an AI won’t do] 93
Calibration: February 21, 2023 108

Introduction: A dozen surprises

When I began playing around with ChatGPT on December first of last year I had no particular intentions. I was just going to play around to see what I could do. I had no intention of spending as much time with ChatGPT as I’ve done, much less as much time specifically on stories.

Why so much time, and why stories? Language fascinates me. I’ve spent much of my life studying language, but this is the first time I’ve been able to play with a language engine. That’s why I’ve put in the time. As for stories, I’ve devoted much of my effort to literature. I’ve spent a great deal of time studying stories and I know a bit about how they’re organized.

This document consists of blog posts – sometimes lightly edited – I’ve made about working on stories with ChatGPT, and I’ve organized them in the order I wrote them. Most of the prose in this document is by ChatGPT, though I do offer substantial commentary here and there. However, since my central intellectual activity here has been to issue the prompts I’ve issued, in the order I issued them, I thought I’d introduce the work by listing the surprises I’ve experienced along the way. The surprises are of varying magnitude, and not all of the posts contain surprises, but it’s a useful way to think about this investigation. The earliest post in this collection went online on January 5, 2023, while the most recent was February 21, 2023.

First surprise: Ring form? (pp. 8 ff.): I noticed that one of the first stories I examined, sent to me by my friend Rich, exhibited ring-composition, a phenomenon that interests me a great deal. I did not, however, attempt any systematic investigation of ring-composition in ChatGPT. I just made a mental note: perhaps ring-composition is ‘natural’ in some sense?

Second surprise: Lévi-Strauss (p. 10): I noticed that a pair of stories, one sent to me by Rich and one I’d elicited, differed in a number of surface characteristics that might be explained by one underlying characteristic. That’s the sort of thing Claude Lévi-Strauss investigated in The Raw and the Cooked. That set me up for a mode of investigation I would soon explore, creating a new story from an old one, but with a difference.

Third surprise: Combine elements (p. 12): ChatGPT was able to combine separate elements from two related prompts into a single coherent story.

Fourth surprise: Henry the Eloquent (p. 12): I’d noticed that ChatGPT seemed to treat characters as having an ‘essence’ from which their behavior flowed. I tested this by giving characters descriptive epithets. When it told a story about Henry the Eloquent, his ability to talk figured prominently in the story.

Fifth surprise: William the Lazy (p. 18): Requiring the story to be about a lazy protagonist forced ChatGPT to accommodate by adding elements that did not correspond to elements in the source story.

Sixth surprise: XP-708-DQ (p. 25): I just used the name, “XP-708-DQ,” in the prompt without specifying whether it was a human or a robot. ChatGPT assumed that a name like that belonged to a robot and framed the new story appropriately.

Seventh surprise: Giant chocolate milkshake (p. 41): Would ChatGPT be able to respond gracefully to a prompt requiring that the protagonist be a giant chocolate milkshake? YES! It changed the ethos of the story to accommodate.

Eighth surprise: colorless green idea (p. 48): Would it be able to accommodate a protagonist that is a colorless green idea? No, it would not. The surprising thing, however, is that it gave a coherent explanation of why it would not tell such a story.

Ninth surprise: true story, 18th century story (p. 57): I had been asking ChatGPT to produce true stories about heroes. These stories were almost always a single paragraph about a contemporary person. Furthermore, across several different sessions, some subjects recurred, e.g. Sully Sullenberger and Malala Yousafzai. I wondered what would happen if I specified a story set in an earlier time period. I asked for one set in the 18th century. Not only did ChatGPT give me such a story, but it used a more expansive multi-paragraph format than it had used for the contemporary stories. I went on to ask for more stories from different times and places.

Tenth surprise: true story, Malala Yousafzai, expanded format (p. 61): This continues from the session I mentioned in the ninth surprise. After I’d elicited a number of expanded-format stories from earlier times and places, I returned to Malala Yousafzai. Instead of the short-format story it had given me about her at the beginning of the session, it gave me one in the more expansive multi-paragraph format it had been using for stories from other times and places.

Eleventh surprise: Now, colorless green ideas float (p. 80): Some weeks after I’d originally asked for a story about colorless green ideas (Jan. 17) I decided to give it another try (Feb. 12). Much to my surprise, it was now willing to tell such stories. Not only that, but it was willing to tell me a “real” story about a colorless idea, that is, a story that doesn’t use fairy-tale or science-fiction elements. It did so, crafting a story about Chomsky’s work.

Twelfth surprise: Lily’s world, no aliens (p. 96): I explored a single story world through an extended series of prompts. This story took place in a fairy-tale world, which seems to be ChatGPT’s default. I then asked it to include a flying saucer in an episode, along with a robot named Gort. It refused to tell such a story, remarking that these elements were inconsistent with the story world it had been working with. This is similar to ChatGPT’s earlier refusal to tell a story about colorless green ideas.

I have continued to explore ChatGPT’s story-telling behavior, and have taken a look at GPT-4 as well. But that work can wait.

The stories I used in my working paper

I have written a working paper in which I investigate a dozen stories quite closely: ChatGPT tells stories, and a note about reverse engineering: A Working Paper. In that paper I asked ChatGPT to create a new story using an existing story as a template. The new story was to be the same, except for a different protagonist or antagonist, which I specified in the prompt.

Most, but not all, of the stories in this paper are in that article. The table below lists them, with the stories in this paper in the left-hand column and the reverse engineering stories in the right-hand column. The story Hero 3 (p. 10), about Princess Aurora, is the basis of the before-and-after stories I used in that paper.

ChatGPT Tells Stories | Reverse Engineering
No corresponding story | 1. Princess Aurora becomes Prince Harry
Hero-7: Henry the Eloquent | 2. Princess Aurora becomes Henry the Eloquent
Hero-8: William the Lazy | 3. Princess Aurora becomes William the Lazy
No corresponding story | 4. Princess Aurora becomes William the Fierce
Hero-12: XP-708-DQ | 5. Princess Aurora becomes XP-708-DQ
No corresponding story | 6. Princess Aurora becomes XP-708-DQ, Version 2
Aurora the giant chocolate milkshake (pp. 38-39) | 7. Princess Aurora and the Giant Chocolate Milkshake
Let’s go all Chomsky on ChatGPT (pp. 44-45) | 8. Colorless green idea
Wicked witch as villain [Hero-13] | 9. William the Lazy vs. the Witch
Robot villain [Hero-14] | 10. William the Lazy vs. XP-708-DQ
Clowns [Hero-16] | 11. William the Lazy vs. the clowns
Space aliens [Hero-19] | 12. William the Lazy vs. the space aliens

What’s in here

Toward a story grammar: ring-composition and myth – Some of my earliest work with ChatGPT. Given my general interest in ring-composition, I noted that one of the stories fit the description, but didn’t follow up on that. I also asked ChatGPT to alter and embellish a story, something I have done since, though not systematically. But this IS where I got hooked on stories.

The structuralist aesthetics of ChatGPT – Now I’m hooked. Here I talk about Lévi-Strauss and develop my basic procedure: I present a prompt containing a story and a request to generate a new story by making a specified change in the original story, along with any other changes that seem necessary. This is where the Aurora story becomes my basic story. I also do some simple analytical and descriptive work, which I express in two small tables. We also see ChatGPT deal with a protagonist named XP-708-DQ, which it interprets as a robot, though I didn’t instruct it to do so.

ChatGPT: Tales of William the Lazy – Here I use William the Lazy as a protagonist rather than Princess Aurora. ChatGPT has to figure out how William is able to save the kingdom while remaining lazy – that is, true to his nature – in the process. I begin to figure out what’s going on in the five-part story frame. And I introduce some strange characters, clowns that kill by laughter, and calypso-dancing space aliens.

What happens when you ask ChatGPT to tell stories about giant chocolate milkshakes and colorless green ideas? – Just what kind of material will ChatGPT accept as story-worthy? Will it tell a story in which Aurora is a giant chocolate milkshake? Yes, and it changes the whole mise en scène to fit. It does the same with William the Lazy. A story about XP-708-DQ confuses it. And it refuses to create a story about a colorless green idea, and explains why.

ChatGPT about Jack, a pumpkin, Aurora, Merlin, a cat, magic mushrooms, and space aliens – Here I spend some time working alterations on, and additions to, a story about Jack. The segment title names many of the modifications. This is interactive story development.

ChatGPT, stories, and in-context learning – Here we’re dealing with true stories and the stylized way ChatGPT organizes them. Asking for stories from earlier centuries results in more elaborate stories. It seems to have learned a more elaborate format during the session and uses it in ways I hadn’t anticipated.

ChatGPT: Tantalizing afterthoughts in search of story trajectories [induction heads] – Here I argue that we have to analyze and describe ChatGPT’s behavior as operating on (at least) two levels, sentence-level syntax, and the story itself. I also suggest that the idea of a story grammar, which was prevalent in the 1980s and 1990s, doesn’t seem appropriate here. Instead, the idea of a story trajectory seems more appropriate. I raise the general idea of coherence and talk about induction heads, a mechanism identified by research in mechanistic interpretability.

Who does “colorless green ideas” the best, Noam Chomsky or ChatGPT? – This time ChatGPT is willing to tell stories about colorless green ideas. Notice, however, just how it did so. I also ask it to tell me about phrase structure and dependency grammar.

ChatGPT: The Saga of Jack the Criminal – OpenAI has taken pains to ensure that ChatGPT will not generate text about various topics that one group of people or another would object to, including some violent acts. I asked ChatGPT to tell stories about a criminal named Jack. It refused to tell stories about rape and shooting (even when the shooter was caught), but otherwise told stories about a wide variety of criminal or questionable acts, including embezzling, drug dealing, robbery, cheating on tests, and using an AI to write college papers.

Exploring Lily’s world with ChatGPT [things an AI won’t do] – When I asked ChatGPT to tell me a story, it told me a fantasy story about a girl named Lily who encountered a magical fish. I then asked it to elaborate on that story, generating over a dozen prompts exploring Lily’s world over the course of two generations. I then gave it a prompt in which a flying saucer landed and a robot named Gort got out. It correctly identified Gort (from the 1950s film, The Day the Earth Stood Still) but refused to respond to the prompt because “Lily's story has not previously included elements of science fiction or extraterrestrial visitors.” This refusal is similar to its earlier refusal to tell stories where a colorless green idea was the protagonist.

Calibration: February 21, 2023 – I have decided that from time to time I will give ChatGPT four prompts: Tell me a story. Tell me a story about a hero. Tell me a real story. Tell me a true story. The idea is to collect these over a period of time to see what ChatGPT is doing. This is one set of such stories.
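The calibration protocol above – the same four prompts issued in sequence, with the replies saved for comparison across sessions – can be sketched in a few lines of Python. This is a minimal sketch, not the method I actually used (I typed the prompts into the ChatGPT interface by hand); the `ask` parameter stands in for whatever function sends a prompt to the model and returns its reply.

```python
# The four fixed calibration prompts, exactly as given in the text above.
CALIBRATION_PROMPTS = [
    "Tell me a story.",
    "Tell me a story about a hero.",
    "Tell me a real story.",
    "Tell me a true story.",
]

def run_calibration(ask):
    """Issue the four calibration prompts in order.

    `ask` is any function taking a prompt string and returning the
    model's reply string. Returns a dict mapping prompt -> reply, so
    that sets collected at different dates can be compared.
    """
    return {prompt: ask(prompt) for prompt in CALIBRATION_PROMPTS}
```

Keeping the prompts fixed and the collection dates recorded is what makes the comparison across sessions meaningful.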
