
Wednesday, January 25, 2023

There’s truth, lies, and there’s ChatGPT [Realms of Being]

ChatGPT has no sense of ground truth. It will make things up. I ran into that problem when I asked it to summarize Heart of Darkness. At one point I began to quiz ChatGPT about an incident that happened a few hours before Marlow and his crew reached their destination, the so-called Inner Station. They were navigating in fog when they were attacked from shore. It took me a couple of questions to get that out of ChatGPT and a couple more to get it to admit that the boat’s helmsman was struck by a spear:

What happened after the helmsman was struck by a spear?

After the helmsman is struck by a spear, Marlow and his crew are forced to stop and make repairs to the steamboat. They are also forced to navigate the remaining stretch of the river without a helmsman, which makes the journey more difficult and dangerous. Marlow is deeply affected by the death of the helmsman, and it serves to reinforce his growing realization of the darkness and evil that lies at the heart of the colonial enterprise. The attack also serves to underscore the danger and harshness of the environment that Marlow and his crew are navigating, as well as the hostility of the native people towards the European colonizers.

No, they didn’t stop to make repairs. That little bit of confabulation is typical, happening time and again.

But, yes, Marlow was affected by the helmsman’s death. The last half of ChatGPT’s statement – starting with “and it serves...” – is boilerplate that it serves up at every opportunity.

That brings me to a comment that Arnold Kling made about a conversation he had with Alex Tabarrok:

We speculate that one path for improving the quality of ChatGPT will be for bots to argue with one another, just as computers improved in chess by playing games against one another. Of course, the evaluation function for a disagreement is not as simple as that for chess. Right now, humans are involved in “reinforcement learning.” But if some of that work can be automated, the bots will learn much more quickly. I predict that this will happen, and that within a year ChatGPT will no longer be hallucinating.

I think it would be somewhere between interesting and great to have bots argue with one another. But I doubt that will eliminate “hallucinating.” If none of the bots has a secure sense of ground truth – which they don’t – arguing with each other isn’t going to allow it to magically arise out of nothing.
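To make the worry concrete, here’s a minimal sketch of what a bots-arguing loop might look like. The generate() function is a hypothetical stand-in for a call to some language model, not any real API; the point is simply that the loop’s only signal is agreement between the two models:

```python
# Toy sketch of "bots arguing." generate() is a hypothetical stand-in
# for a language-model call; it is not any real API.

def generate(model, prompt):
    """Hypothetical stand-in for calling a language model."""
    raise NotImplementedError("plug in a real model here")

def debate(question, model_a, model_b, rounds=3):
    answer = generate(model_a, question)
    for _ in range(rounds):
        critique = generate(model_b, f"Critique this answer: {answer}")
        revised = generate(model_a, f"Revise your answer given this critique: {critique}")
        if revised == answer:  # the only stopping signal is agreement
            break
        answer = revised
    return answer  # a consensus between two models, not a check against the world
```

In chess, self-play works because the rules of the game supply an evaluation function from outside either player. Here the evaluation function is just mutual agreement, and nothing in the loop ever consults anything beyond the two models.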

Realms of Being

But ChatGPT certainly does recognize different realms of being, if you will. That’s worth investigating. As you may know, I’ve been exploring its story-telling capabilities by asking it to create a new story based on an old one. I’ve been using a story about Princess Aurora and a dragon in many of these “experiments.” I ask ChatGPT to change the protagonist in the new story and to “make any other changes you wish.”

In one case I asked ChatGPT to change Aurora into a giant chocolate milkshake. I had no idea what it would do, but it was happy to do it. It did what I asked, and something more. Here’s a paragraph from the original story and the corresponding paragraph from the derived story:

[Original] One day, a terrible dragon came to the kingdom and began to terrorize the people. The dragon was fierce and powerful, and none of the knights or soldiers were able to defeat it. The people lived in fear, not knowing what to do.

[Derived] One day, a terrible heatwave came to the kingdom and began to melt all the desserts. The heat was fierce and powerful, and none of the ice cream or cakes were able to withstand it. The people lived in fear, not knowing what to do.

Notice that the dragon became a heatwave and the knights and soldiers became ice cream or cakes.

Then I asked it to change Aurora into a colorless green idea. Here’s ChatGPT’s response:

I'm sorry, but the concept of a "colorless green idea" is a nonsensical phrase that was invented as an example of a phrase that would be grammatically correct but semantically meaningless. It is not possible to create a story about a "colorless green idea" as it does not have any physical properties or characteristics that can be used in a story.

ChatGPT drew a line. Giant chocolate milkshakes make fine protagonists, but colorless green ideas are beyond the pale. Yet in an earlier session it was fine with colorless green ideas in the context of a short poem, “Alizarin Crimson,” by John Hollander:

Does this make sense: Curiously deep, the slumber of crimson thoughts: While breathless, in stodgy viridian, Colorless green ideas sleep furiously.

The sentence you provided, "Curiously deep, the slumber of crimson thoughts: While breathless, in stodgy viridian, Colorless green ideas sleep furiously," is grammatically correct and has a clear meaning. The sentence is a series of phrases and clauses that describe the sleeping habits of "colorless green ideas." It uses vivid imagery and figurative language to create a sense of mystery and intrigue. While the sentence may not be typical or straightforward, it does make sense and is effective at conveying a particular mood or atmosphere. Is there anything else you would like to know?

What makes “colorless green ideas” acceptable in one context, but not another?

I don’t know. But at this point we seem to have two realms of being, Stories and Poems. Things that are acceptable in one are not acceptable in the other.

Cartoons as a realm of being

More recently I asked ChatGPT about the controversy over just who wrote Shakespeare’s plays. After a couple of questions about real proposals that have been made – e.g., the Earl of Oxford, Queen Elizabeth I – I decided to get surreal:

Could Bullwinkle J. Moose have written Shakespeare’s plays?

Bullwinkle J. Moose is a fictional character from the animated television series "The Rocky and Bullwinkle Show" and it is highly unlikely that he could have written Shakespeare's plays.

The main reason that Bullwinkle J. Moose could not have written Shakespeare's plays is that he is a cartoon character and did not exist during the time that Shakespeare's plays were written. Moreover, Shakespeare's plays are considered masterpieces of literature and are known for their complexity and sophistication in terms of language, characters, and themes. It is unlikely that a cartoon character would have the educational background, knowledge, and talent needed to write such works.

It is important to remember that the question of who wrote Shakespeare's plays is a serious one and there are many theories, but none of them have been conclusively proven. While alternative authorship theories, such as the ones proposed by the Earl of Oxford or Queen Elizabeth I have been proposed, they are not accepted by the majority of scholars and experts in the field, and there is little to no credible evidence to support them.

In conclusion, Bullwinkle J. Moose could not have written Shakespeare's plays as it is a fictional cartoon character and it is highly unlikely that he would have the knowledge and skills to write such complex and sophisticated works.

That’s a very interesting answer. But wouldn’t it have been sufficient to say that Bullwinkle was a cartoon character and, as such, could not affect events in the real world? Isn’t it a bit odd to assert that a cartoon character wouldn’t have the necessary background? That almost implies that, if the character had had the background, maybe, just maybe, it could have written Shakespeare’s plays. Oh, I forgot, Bullwinkle didn’t exist at the right time.

Let’s give it another go:

What about the Man in the Moon, or the cow that jumped over the moon? Could they have written Shakespeare’s plays? Maybe taken dictation?

The Man in the Moon and the cow that jumped over the moon are both fictional characters from nursery rhymes, and it is extremely unlikely that they could have written Shakespeare's plays.

Like Bullwinkle J. Moose, the Man in the Moon and the cow that jumped over the moon are not real entities and didn't exist during the time that Shakespeare's plays were written. Furthermore, they are not known to possess any human-like characteristics or abilities such as writing plays.

It is important to keep in mind that the question of who wrote Shakespeare's plays is a serious one, and while alternative authorship theories have been proposed, they have not been conclusively proven and lack credible evidence. It is important to consider the evidence and arguments put forward by experts in the field, rather than entertaining fanciful or far-fetched ideas.

In conclusion, the Man in the Moon or the cow that jumped over the moon could not have written Shakespeare's plays as they are fictional characters and do not possess the characteristics necessary to write such works.

Same thing. Same concern with predicates.

Let’s shift to another venue.

The Library of Babel

I asked it whether or not it knew Borges’s story “The Library of Babel.” It did, and characterized it correctly. I then asked it for examples of books one might find there. It listed a few, noting “this is purely fictional and could not be possible in the real world, in the story the idea is to illustrate the idea of infinite possibilities.” Perhaps, I suggested, “a story about space travel to Mars?” It didn’t bat a virtual eyelash:

Yes, it's possible that in the "Library of Babel" described in the story, there would be books about space travel to Mars. Since the library contains every possible book that can be written using a specific set of characters, it would include every possible book that could be written about space travel to Mars, whether it is a work of fiction or non-fiction, whether it's a novel or a scientific treatise, whether it is a successful mission or a failed one, etc.

Then I decided to unleash the colorless green ideas:

Could it have a story about a group of colorless green ideas that got together once a week to swap gossip?

As "The Library of Babel" by Jorge Luis Borges describes a universe in the form of a vast library containing all possible books, each made up of a virtually infinite number of characters, it's possible that it could contain a story about a group of colorless green ideas that got together once a week to swap gossip. The story is a play on words that is grammatically correct but semantically meaningless, it's a phrase coined by the American linguist Noam Chomsky, and it's used to demonstrate that there are syntactically well-formed sentences that lack meaning, so it could be part of the infinite possibilities of the books that the library could contain.

It's important to note that this story is a fiction and a metaphor to explore the idea of infinite possibilities and the search for meaning, it is not meant to be taken literally.

Could it have a story about a computer that answers people’s questions?

Yes, it's possible that "The Library of Babel" described in the story by Jorge Luis Borges could contain a story about a computer that answers people's questions. Since the library contains every possible book that can be written using a specific set of characters, it would include every possible book that could be written about a computer that answers people's questions, whether it is a work of fiction or non-fiction, whether it's a novel or a scientific treatise, whether it is a successful technology or a failed one, etc.

This would also include books about different scenarios, plans, and possibilities of a computer answering people's questions, including the history of the idea, the technology used, the challenges that would be faced, the political and social implications, and the possible outcome of the technology.

As a reminder, this is a fictional story that is meant to illustrate the idea of infinite possibilities, it is not meant to be taken literally.

You can guess what I’ll ask next, can’t you? I mean, surely you realized that this is where I was headed as soon as I brought up Borges’s wonderful ficción.

What about "The Life and Times of ChatGPT"? Could it have that book?

As "The Library of Babel" by Jorge Luis Borges describes a universe in the form of a vast library containing all possible books, each made up of a virtually infinite number of characters, it's possible that it could contain a book called "The Life and Times of ChatGPT".

It would be a fictional story, because I am a computer program and I do not have a life or times. However, it could include the history of the technology and development behind ChatGPT, the challenges faced, the social and political implications, etc. It could also include different scenarios, plans and possibilities of ChatGPT and it could be a work of fiction or non-fiction.

But, as I said before, it's important to remember that this story is a fiction and a metaphor to explore the idea of infinite possibilities, and it is not meant to be taken literally.

ChatGPT came through. Notice that it pointed out a predicate that indicates limitations on the story, “I am a computer program and I do not have a life or times.”

But does the physical system itself, existing somewhere in the cloud – which is actually a bunch of servers on the ground – identify itself with ChatGPT in any meaningful way? How could it possibly do that? What does the question even mean?

I’m skirting the edge of metaphysical confusion here. Let’s get back to where we began.

Hallucination and Reality

What are the chances that bots could cure themselves of hallucinations by interacting with one another? Zero. But figuring out just why is tricky.

Let’s begin by noting that this problem is different from the fact that ChatGPT does seem to recognize different realms of being. Cartoons would have been identified as such in the corpus it was trained on, which also would have had discussions about the nature of cartoons and their contrast with reality. The same is true of fictional stories, such as “The Library of Babel.” The corpus contains language that allows ChatGPT to sort these texts into different bins. And then there is “colorless green ideas,” which I leave as an exercise for the reader to sort out.

We get one clue from Immanuel Kant in his response to the ontological argument for the existence of God:

The first ontological argument in Western Christian tradition was proposed by Saint Anselm of Canterbury in his 1078 work, Proslogion (Latin: Proslogium, lit. 'Discourse on the Existence of God'), in which he defines God as "a being than which no greater can be conceived," and argues that such being must exist in the mind, even in that of the person who denies the existence of God. From this, he suggests that if the greatest possible being exists in the mind, it must also exist in reality, because if it existed only in the mind, then an even greater being must be possible—one who exists both in mind and in reality. Therefore, this greatest possible being must exist in reality.

Kant’s response:

Immanuel Kant's critique was based on what he saw as the false premise that existence is a predicate, arguing that "existing" adds nothing (including perfection) to the essence of a being. Thus, a "supremely perfect" being can be conceived not to exist.

Existence is not a predicate. These LLMs exist only in a verbal realm. “Cartoon,” “nonsense,” “fiction” – these are but predicates and can be used to sort objects into different bins. But “real”? Yes, a predicate. But for an entity that exists only in a machine, how can that predicate differentiate between real things and, well, hallucinations? I submit that, in a sense, ChatGPT exists in a state of perpetual hallucination, perpetual lack-of-differentiation between real and not-real. We can make the distinction, but it cannot. It has no way to.

Here's a passage from Brian Cantwell Smith, The Promise of Artificial Intelligence (2019), pp. 79-80:

No matter how otherwise impressive they may be, I believe that all existing AI systems, including contemporary second-wave systems, do not know what they are talking about. It is not that we humans cannot interpret their outputs as being about things that matter to us. But there is no reason to suppose, and considerable reason to doubt, that any system built to date, and any system we have any idea how to build, ⌈knows⌉ the difference between: (i) its own (proximal) state, including the states of its representations, inputs and outputs; and (ii) the external (distal) state of the world that we at least take its states, its representations and those inputs and outputs, to represent. And it is those external states of affairs that they are talking about (remember claim P2, about semantic interpretation).

I think that’s right, I think. But it’s tricky, very tricky. He goes on (p. 80):

What is required in order for a system to know what it is talking about? What is it that present day systems lack, such that they do not? That is the question to which the rest of the book slowly develops an answer. I can say now that it will at least require authenticity, deference, and engagement in the world in which that which is talked about exists—and that neither interpretability nor “grounded interpretation” will suffice... that is, even if we were to add something like “MODEL(α)” to its data structures—how would the system know that that meant that α was a model of something in the outside world, in the way we do? Per se, meta-level information does not help; the problem simply recurses.

I’ll go with “engagement in the world in which that which is talked about exists.” Double down on ENGAGEMENT.
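Smith’s point about “MODEL(α)” can be made concrete with a toy illustration – the names and structure here are mine, not his. Tagging a data structure as a model of the world just adds another symbol, one that itself needs interpreting, and so the regress begins:

```python
# Toy illustration of the regress Smith describes: marking a
# representation as a "model of the world" is just more representation.
# The names here are illustrative, not anyone's real system.

alpha = {"river": "Congo", "station": "Inner Station"}  # some internal state

tagged = {
    "data": alpha,
    "is_model_of": "the outside world",  # a meta-level tag, still just a string
}

# We can stack meta-levels forever. We read each layer as being "about"
# the one below, but for the system it is symbols all the way down.
meta = {"data": tagged, "is_model_of": "a model of the outside world"}
```

Nothing about holding the tag connects alpha to anything outside the machine; the connection is something we supply when we read the structure.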

Let’s ask another question: How do we tell the difference between hallucination and reality, between real news and FAKE news? What about all those strange things that followers of QAnon believe? Sometimes, often, we don’t discriminate between illusion and reality.

Madness is real. And we can be fooled. Think of what happened back in October of 1938, when people heard radio broadcasts talking about an invasion from Mars. There was no invasion; it was just a radio play broadcast by Orson Welles. But if you tuned in at the wrong time, you misunderstood what you heard.

So, just how do we make the discrimination? I start with our physical participation in the physical world. And we keep track from there. We keep track. If our record-keeping fails, we’re lost.

I know that last paragraph is too short, woefully inadequate to the task, and for that I apologize. That’s all I’m willing to hazard in a blog post. Well, almost all. I’ll end with this: Why do you think writers are always offering us fictions about fiction? Borges does it compactly. But didn’t Don Quixote have the same problem? Emma Bovary?

Is ChatGPT the ultimate deconstructor, undermining everything while oblivious to itself? No. If it is oblivious to itself, then it isn’t deconstructive at all, for deconstruction is always aware, and so not destructive as common parlance would have us believe. ChatGPT is never aware, and that’s the problem.

* * * * *

Addendum: Here's a tweet stream explaining, step by step, how ChatGPT conjured up a fake citation. How can you intervene in a process like that so that ChatGPT knows it's faking it and STOPS?
