
Wednesday, July 7, 2021

Let’s think of GPT-3’s prose output as a form of bullshit, where “bullshit” is a term of philosophical art.

I do not mean “bullshit” as a term of opprobrium. I mean it as a term of philosophical art, where, in terms advanced by Harry G. Frankfurt in his well-known essay On Bullshit (which I’ve never read), it designates a kind of language concocted without regard to truth. Bullshit must be coherent and sound plausible, but its truth is irrelevant to the speaker’s purpose.

Bullshit and intelligence

That’s how the term is used in a recent article, Bullshit Ability as an Honest Signal of Intelligence [1]. Here’s the abstract:

Navigating social systems efficiently is critical to our species. Humans appear endowed with a cognitive system that has formed to meet the unique challenges that emerge for highly social species. Bullshitting, communication characterised by an intent to be convincing or impressive without concern for truth, is ubiquitous within human societies. Across two studies (N = 1,017), we assess participants’ ability to produce satisfying and seemingly accurate bullshit as an honest signal of their intelligence. We find that bullshit ability is associated with an individual’s intelligence and individuals capable of producing more satisfying bullshit are judged by second-hand observers to be more intelligent. We interpret these results as adding evidence for intelligence being geared towards the navigation of social systems. The ability to produce satisfying bullshit may serve to assist individuals in negotiating their social world, both as an energetically efficient strategy for impressing others and as an honest signal of intelligence.

I’m willing to take that at face value. Why? Because in this post I’m not interested in human intelligence, what it is or why.

I’m interested in GPT-3, an extremely powerful AI engine that is able to produce astonishingly convincing prose, so convincing that some have hailed it as a harbinger of AGI, artificial general intelligence. I don’t know about that, mainly because I don’t think AGI is a very useful notion.

But I do believe that GPT-3 represents a major advance, a phase change [2], in the broad research agenda generally known as AI (artificial intelligence). For the purposes of this post, let us say that intelligence is oriented toward truth, that one thing we desire of an artificially intelligent agent is that it at least attempt to tell the difference between truth and falsity. Thus, by definition, GPT-3 cannot be intelligent. Nor, for that matter, can systems that play games, such as chess, Go, or Atari video games. Recent AI systems have attained remarkable success in those areas. It would seem that I’ve defined them out of the sphere of intelligence as well. I note, as well, that in defining intelligence this way, I seem to be contradicting the article I’ve cited.

Perhaps, perhaps not so much. After all, we distinguish between subject matter tests and intelligence tests, which are supposed to be independent of one’s general knowledge. In a subject matter test one offers judgments about the truth value of subject matter items and is asked to produce true statements about the subject matter. Intelligence tests are somewhat different. Might they be about making coherence judgments?

Coherence in discourse

How does GPT-3 work? It is a prediction engine and exists in two (life) phases, training and inference. During training it is fed a stream of text, an extremely, humongously long stream of text. It is fed one word and asked to predict the next. If the prediction is correct, GPT-3 leaves its internal language model more or less unchanged; if the prediction is incorrect, it revises the model accordingly. It goes on this way until all the text has been consumed and its predictions succeed at a sufficiently high level, whatever that may be.
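
For concreteness, here is a toy sketch of that training loop in Python/PyTorch. It is emphatically not GPT-3 — the stand-in model only looks at the current token, and the “corpus” is random numbers — but the shape of the loop is the same: predict the next token, measure how wrong the prediction was, nudge the model’s weights to reduce the error.

```python
import torch
import torch.nn as nn

# Toy next-token predictor (a hypothetical stand-in, not GPT-3's architecture):
# it sees only the current token, where GPT-3 attends to a long window of
# preceding tokens. The training signal is the same shape, though.
vocab_size, embed_dim = 1000, 64
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),  # current token -> vector
    nn.Linear(embed_dim, vocab_size),     # vector -> a score for every possible next token
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (1000,))  # stand-in for a humongous text corpus

for i in range(len(tokens) - 1):
    current, target = tokens[i].unsqueeze(0), tokens[i + 1].unsqueeze(0)
    logits = model(current)        # the model's prediction for the next token
    loss = loss_fn(logits, target) # how wrong was that prediction?
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()               # revise the model so it does a bit better next time
```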

But how does it make the predictions, and what’s that language model like? Good questions. At the moment I don’t care about the answers. I note, of course, that the people who built GPT-3 know a great deal about the first question, but not so much about the second. Let’s move on.

At this point GPT-3 is ready to be used in the inference phase. Basically, one feeds GPT-3 a bit of prose and asks it to continue in the same vein. You have some control over how much output you want from it, and you also have control over a parameter called ‘temperature.’ I understand that this is sometimes misleadingly called ‘creativity,’ presumably because 1) some output appears more ‘creative’ than other output, and the temperature parameter governs this, and 2) a non-technical name for the parameter is difficult to come by. So temperature it is.
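
Here is a minimal sketch of what temperature does, assuming the usual setup in which the model produces a score (a ‘logit’) for every word in its vocabulary and the next word is then sampled from those scores. The numbers below are made up for illustration.

```python
import torch

def sample_next_token(logits: torch.Tensor, temperature: float = 1.0) -> int:
    """Pick the next token from the model's scores.

    Low temperature sharpens the distribution, so the likeliest words dominate
    (safe, predictable prose). High temperature flattens it, so less likely
    words get a real chance (output that reads as more 'creative').
    """
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()

# Made-up scores for four candidate next words.
logits = torch.tensor([4.0, 2.0, 1.0, 0.5])
print(torch.softmax(logits / 0.2, dim=-1))  # near one-hot: almost always word 0
print(torch.softmax(logits / 2.0, dim=-1))  # much flatter: a real spread of choices
print(sample_next_token(logits, temperature=0.7))
```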

Again, moving on, let us look at the constraints imposed on discourse by semantics. By constraint I mean something like this:

Given a string of words from some initial word to some present word N, what is the range of choices for word N+1 such that the discourse will remain coherent?

Thinking informally, it seems to me that the strongest constraints exist within sentences and, within sentences, within phrases. Consider this string:

1) John went to the X.

The number of possibilities for X is quite large: store, bathroom, moon, and so forth. But some things are not allowed: pink, furiously, jumping, at, smell, conjecture, on and on and on.
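One can poke at this constraint directly. Here is a hedged sketch using GPT-2 — a publicly downloadable relative of GPT-3, via the Hugging Face transformers library — which asks the model for its probability distribution over the word following ‘John went to the’ and looks up a few of the candidates above.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("John went to the", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next token

# Probability of the first token of each candidate continuation.
for word in [" store", " bathroom", " moon", " pink", " furiously"]:
    token_id = tokenizer.encode(word)[0]
    print(f"{word:>12}: {probs[token_id].item():.6f}")
```

The expectation, on the argument above, is that ‘store’ and ‘bathroom’ score far higher than ‘pink’ or ‘furiously.’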

Constraints loosen up beyond the sentence boundary. What can plausibly follow ‘John went to the store’? Lots of things:

2) He wanted to buy a new toaster.
3) And then he went to the museum.
4) Sally went with him.
5) But it was closed for the day.
6) Unfortunately he had a heart attack on the way.
7) And then the earth stood still.

Note that those various possible follow-on sentences have somewhat different kinds of relationship to the original sentence. We could talk about that, but not here and now. Note, however, that 7 implies a somewhat different kind of discourse from the others. It implies some kind of apocalyptic narrative while the others are utterly mundane, though 6 is tragic.
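The same kind of scoring can be pushed past the sentence boundary. As a rough sketch (again with GPT-2 standing in for GPT-3), one can sum the model’s log-probabilities over each candidate follow-on sentence given ‘John went to the store.’ Nothing in the scores concerns truth; they only rank how comfortably each sentence continues the discourse.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def continuation_logprob(prompt: str, continuation: str) -> float:
    """Sum of log P(token | everything before it) over the continuation's tokens."""
    ids = tokenizer(prompt + continuation, return_tensors="pt").input_ids
    n_prompt = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        logprobs = torch.log_softmax(model(ids).logits, dim=-1)
    total = 0.0
    for pos in range(n_prompt, ids.shape[1]):
        total += logprobs[0, pos - 1, ids[0, pos]].item()  # logits at pos-1 predict the token at pos
    return total

prompt = "John went to the store."
for cont in [" He wanted to buy a new toaster.",
             " Sally went with him.",
             " And then the earth stood still."]:
    print(f"{continuation_logprob(prompt, cont):8.2f} {cont}")
```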

I made those examples up without any regard for some real stream of events, whether past or ongoing. That last example, 7, implies that I know about reality, but that’s as far as it goes. Nowhere do I assert whether or not any one of those events has ever happened.

In effect, in producing those examples, I was operating in bullshit mode. Well, probably not quite. If, as that article seems to suggest, capacity for bullshit has an adaptive value having to do with social life, then it is probably more accurate to say that I was employing bullshit mode for the purpose of generating linguistic examples.

Now, consider this passage from the article (p. 2): “A growing body of literature has investigated peoples’ receptivity to bullshit, specifically computer-generated pseudo-profound bullshit consisting of random arrangements of superficially impressive words in a way that maintains syntactic structure.” I would prefer to say, however, that these computer-generated strings maintain semantic coherence rather than that they maintain syntactic structure.

For one thing, I think syntax is something of an illusion. Certainly syntax in the sense Chomsky argued for, where syntax is independent of semantics, is an illusion. What Chomsky calls syntax is perhaps better thought of as semantic form. Beyond that, moreover, syntax simply doesn’t exist beyond sentence boundaries (and the notion of a sentence is itself somewhat problematic). But discourse coherence beyond the sentence is real, and it is a matter of semantics, but also of pragmatics [it’s complicated].
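
To see what’s at stake, here is a hypothetical toy version of the pseudo-profound generators that literature refers to: a fixed sentence frame filled with randomly chosen impressive-sounding words. The word lists are invented for illustration. Note that what the frame preserves is arguably as much a semantic shape as a syntactic one; the output is grammatical and superficially coherent, and produced with no regard whatsoever for truth.

```python
import random

# Hypothetical toy generator of pseudo-profound sentences:
# a fixed frame filled with randomly chosen "impressive" words.
nouns = ["consciousness", "intention", "the cosmos", "potentiality", "wisdom"]
verbs = ["transforms", "illuminates", "transcends", "quiets", "unfolds as"]
abstracts = ["infinite possibility", "hidden meaning", "universal energy",
             "the continuity of being"]

def pseudo_profound() -> str:
    return (f"{random.choice(nouns).capitalize()} "
            f"{random.choice(verbs)} {random.choice(abstracts)}.")

for _ in range(3):
    print(pseudo_profound())
```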

What I’m arguing, then, is that what GPT-3 is sensitive to is internal coherence. That’s ALL it has to work with. The texts it consumes during the learning phase, after all, do not consist of words in the fullest sense. The texts consist only of word forms, of signifiers. Word meanings, signifieds, exist in the minds of speakers and listeners, of writers and readers; they do not inhere in the physical symbols or sounds themselves, and the computer has no access to them.

The language model that GPT-3 creates is thus a model of discourse coherence. And discourse coherence is independent of truth. It seems to me that thinking of GPT-3 as a bullshit engine clarifies what it does. At the same time it lowers the stress on our ordinary notions of intelligence and, for that matter, of being human. An artificial human being, that’s rather creepy and threatening; an artificial bullshit artist, that’s a piece of cake. Maybe we can use it as a chatbot for interfacing between computer systems and real people, which IS, after all, one kind of thing people want to use GPT-3 for.

What about those game engines?

What about those AI engines that play games so well, games such as chess, Go, and video games? Are they not intelligent?

OK, set the idea of intelligence aside. It’s a term much abused and overused.

These games aren’t about the world; they aren’t about truth and falsity. They’re about social interaction (chess and Go) or pseudo-social interaction (Atari games, where the machine takes the opponent’s role). If we think of GPT-3 as being in the bullshit business, where bullshit is a vehicle for social interaction, then these engines are all in the same general business: social interaction independent of the external physical world.

Both chess and Go are so-called games of perfect information. All information relevant to the state of the game is there on the game board, and both players have full access to it. Atari games are the same.

But not all games are games of perfect information. Poker is a good example. While one could play poker with only two players, it is generally played with four or more, and there are many versions. But in all versions each player knows something that the others do not: the identity of one or more cards that they hold. The highest level of gamesmanship consists of fooling the other players about your hidden cards while, at the same time, guessing theirs. To play poker well one must be able to lie convincingly.

Poker is a game where truth matters. It is thus a very different kind of game from those others. Like all games it is played in a highly constrained artificial world, but unlike games of perfect information, poker allows for, indeed requires, the play of truth and falsity.
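
The contrast is easy to make concrete. In the toy sketch below (a made-up one-card, two-player game, purely for illustration), the full state of a chess- or Go-like game is visible to everyone, while each poker-style player sees only their own card.

```python
import random

# Hypothetical one-card, two-player game to illustrate imperfect information.
deck = list(range(2, 15))  # card ranks 2..14 (ace high)
random.shuffle(deck)
hands = {"Alice": deck.pop(), "Bob": deck.pop()}

def view(player: str) -> dict:
    """What `player` actually knows: their own card; the opponent's is hidden."""
    return {p: (card if p == player else "??") for p, card in hands.items()}

print("Full state (chess/Go-style):", hands)          # everything on the board
print("Alice's view (poker-style): ", view("Alice"))  # Bob's card hidden
print("Bob's view (poker-style):   ", view("Bob"))    # Alice's card hidden
```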

Where are we? We are at the beginning of the long process of characterizing types of mental or computational processes according to the requirements they must meet. I suppose we could talk about varying kinds of intelligence, but that’s a matter of mere semantics. What we need to talk about are tasks and their requirements.

It seems to me that what GPT-3 does is sufficiently like what bullshit is for the comparison to be worth thinking about. The comparison takes us half a step closer to understanding GPT-3 on its own terms rather than as an approximation to some vague entity we call Artificial General Intelligence.

Food for thought: What is the relationship between coherence and the general idea of a pattern? What I have in mind is that we are pattern-seeking creatures. Are patterns forms of coherence we extract from the otherwise disordered face the world presents to us? It is one thing to recognize a pattern; it is something else to verify it as a true pattern, where truth is understood to be a relationship between the pattern and underlying causal mechanisms.

References

[1] Martin Harry Turpin, Mane Kara-Yakoubian, Alexander C. Walker, Heather E. K. Walker, Jonathan A. Fugelsang, and Jennifer A. Stolz, “Bullshit Ability as an Honest Signal of Intelligence,” Evolutionary Psychology, Volume 19, Issue 2 (issue published April 1, 2021; first published online May 17, 2021), https://journals.sagepub.com/doi/full/10.1177/14747049211000317.

[2] I’ve explained my views in some detail in a working paper from last year, GPT-3: Waterloo or Rubicon? Here be Dragons, Working Paper, Version 2, August 20, 2020, 34 pp., https://www.academia.edu/43787279/GPT_3_Waterloo_or_Rubicon_Here_be_Dragons_Version_2.
