Friday, December 16, 2022

Abstract concepts and metalingual definition: Does ChatGPT understand justice and charity?

Our conceptual worlds are filled with abstract concepts, such as truth, love, beauty, and justice. Such concepts cannot be defined in physical terms. You can’t see them, taste them, hear them, touch them, or smell them. And yet for human society they are as real as sticks and stones, thunder and lightning, a horse, a sparrow, cherry blossoms, or Mt. Fuji.

We can learn about abstractions through stories that exhibit them. René Girard’s concepts of mimetic desire and of sacrifice are abstract. They characterize patterns of action that people take in their interactions with one another. Whether he realized it or not – probably not – Steven Spielberg showed us those concepts in Jaws. And I demonstrated that ChatGPT was able to recognize them, to interpret the film as exhibiting them: Conversing with ChatGPT about Jaws, Mimetic Desire, and Sacrifice.

* * * * *

I begin this post by examining ChatGPT’s ability to work with the concept of justice. First I ask it to define justice, and then to say a little about how Plato treated it in The Republic. Then we see how Chat deals with justice in stories, both creating stories about justice and recognizing justice, or its lack, in stories it is given. I then introduce Chomsky’s well-known, albeit somewhat obscure, distinction between competence and performance. Next comes another example, charity, which I use to set up a discussion of semantic or cognitive networks, a formalism from the now ‘classic’ symbolic AI and computational linguistics of the 1960s, ’70s, and ’80s. Would that notation be suitable for characterizing some (non-trivial) aspect of ChatGPT’s behavioral competence? I conclude by tempting the reader – that’s you – to consider the implications, suggesting that Neal Stephenson’s The Diamond Age: Or, A Young Lady's Illustrated Primer might be a useful source of insight.

What is justice?

Let’s start by asking ChatGPT to define justice, which it does. Then I ask about Plato’s treatment of justice in The Republic.

What is justice?

Justice is the concept of fairness and equality. In the legal context, it refers to the fair and proper administration of laws. In a broader sense, justice is the ideal state in which individuals are treated equally and without prejudice, and in which the laws and institutions of society are fair and just. It is often considered to be one of the fundamental principles of society, and it is an important concept in many ethical and philosophical systems.

What does Plato say about justice in The Republic.

In Plato's dialogue The Republic, the character of Socrates discusses the concept of justice with several other characters. Plato's view of justice is closely tied to his concept of the ideal state, in which individuals are divided into different classes based on their natural abilities and inclinations. According to Plato, the just state is one in which each individual performs the function for which they are best suited, and in which the rulers are those who are most wise and just.

Socrates argues that justice is the quality that allows individuals to perform their proper functions and to live a good life. He also argues that justice is necessary for the health and stability of the state, and that a just state is one in which everyone works together for the common good. Furthermore, Plato's Socrates asserts that justice is an inherent virtue that is present in the soul of every individual, and that it can be nurtured and developed through education and experience.

Overall, Plato's view of justice is focused on the idea that individuals should be guided by reason and that they should strive to live in harmony with one another and with the laws of the state. He believes that justice is essential for the well-being of both individuals and society as a whole.

As for the response to the Plato prompt, I assume – though I don’t actually know this – that ChatGPT is summarizing and/or paraphrasing material that it found on the web, as this is perhaps the central topic of discussion about The Republic. That is to say, ChatGPT did not arrive at those three paragraphs by applying a definition of justice directly to Plato’s text. And that is fine with me. Competent summary and paraphrase are worthwhile intellectual tasks.

I tweeted this interaction to a friend of mine, John Holbo, who teaches philosophy at the National University of Singapore. Working with Belle Waring as translator, he’s produced Reason and Persuasion: Three Dialogues By Plato: Euthyphro, Meno, Republic Book I. Here’s what he said:

The next day, using a story as a prompt

The next day I decided to try something different. In the previous session I’d started by prompting ChatGPT with an abstract concept, justice, and then asked it to expand upon that. Let’s see if we can work in the opposite direction, starting with a story about injustice and asking whether ChatGPT can identify it as such, and then produce a new story in which justice is served.

I used a story that Chat had produced in an earlier session. While Chat keeps track of everything that happens within a session, it doesn’t retain what happened in previous sessions – though I’m pretty sure that OpenAI is keeping records of all these interactions so that it can do something or other with them. So, Chat is facing this new prompt with a properly empty “mind.”
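That session-scoped memory is easy to picture. Here is a minimal sketch – my own toy stand-in, not OpenAI’s actual machinery – of a chat loop whose entire “memory” is the transcript it carries along, and which starts from nothing in each new session:

```python
# A toy stand-in for session-scoped memory; `model` is a placeholder
# for the real network, which sees only what is passed to it.
def model(history):
    return f"(reply conditioned on {len(history)} prior turns)"

class ChatSession:
    def __init__(self):
        self.history = []                # empty at the start of every session

    def say(self, prompt):
        self.history.append(("user", prompt))
        reply = model(self.history)      # the whole transcript rides along
        self.history.append(("assistant", reply))
        return reply

session_one = ChatSession()
session_one.say("What is justice?")      # memory accumulates within a session
session_two = ChatSession()              # a fresh session: a properly empty "mind"
```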

Notice that the first sentence in my prompt is ungrammatical. That was an accident. ChatGPT was unfazed.

In the following story, do we justice being served?

Once upon a time, in a busy city, there was a fire that broke out in a tall office building. People were panicking and trying to escape, but the fire was spreading quickly and the exits were blocked.

Just when it seemed like all hope was lost, a brave and selfless hero stepped forward to save the day. He bravely fought his way through the flames and smoke, risking his own life to rescue as many people as he could.

Despite the danger, the hero managed to lead many people to safety, and the fire department arrived just in time to put out the blaze. Thanks to the hero's quick thinking and bravery, no lives were lost.

But instead of thanking the hero and giving him the recognition he deserved, the media and the public focused on the wealthy CEO of the company who was inside the building at the time of the fire. They praised him for being "brave" and "selfless," despite the fact that he had done nothing to help anyone.

The hero was heartbroken and disillusioned by the lack of gratitude and fairness. He had risked his life to save others, and yet he was being ignored and cheated out of the fame and recognition he deserved.

But despite the unfairness of it all, the hero held his head high and continued to be a shining example of courage and selflessness. He knew that the true reward for his actions was not fame or recognition, but the satisfaction of knowing that he had made a difference and saved lives.

Based on the story, it does not seem that justice is being served. The hero performed a selfless and courageous act by risking his own life to save others, but instead of being recognized and thanked for his actions, he was ignored and overshadowed by the media and public's focus on the wealthy CEO, who did not do anything to help. This is not fair to the hero, as he deserves to be recognized and celebrated for his bravery and the difference he made. The hero's true reward of knowing that he saved lives is important, but it is not a substitute for the recognition and appreciation that he deserves.

What is justice?

Justice is the principle of fairness and the concept of moral rightness. It refers to the fair treatment of individuals and the upholding of their rights and duties within a society. In legal terms, justice refers to the fair and impartial administration of the law, where all individuals are treated equally under the law and have the right to a fair and unbiased legal process.

Justice is an important concept in many societies, as it helps to ensure that individuals are treated fairly and with respect, and that there is a sense of balance and fairness in the way that laws and rules are enforced. It is often considered to be a fundamental principle of a just and fair society, and it is often upheld through the legal system and other institutions that are responsible for promoting and enforcing justice.

Would you please change the previous story so that justice is served?

Sure, here is a revised version of the story where justice is served:

Once upon a time, in a busy city, there was a fire that broke out in a tall office building. People were panicking and trying to escape, but the fire was spreading quickly and the exits were blocked.

Just when it seemed like all hope was lost, a brave and selfless hero stepped forward to save the day. He bravely fought his way through the flames and smoke, risking his own life to rescue as many people as he could.

Despite the danger, the hero managed to lead many people to safety, and the fire department arrived just in time to put out the blaze. Thanks to the hero's quick thinking and bravery, no lives were lost.

The hero was celebrated and recognized for his bravery and selflessness. The media and the public recognized his heroism and praised him for his actions. The hero was hailed as a true hero, and his bravery and quick thinking were recognized and celebrated.

The wealthy CEO of the company, who had been inside the building at the time of the fire, also recognized the hero's bravery and thanked him for his actions. The hero was deeply touched by the CEO's appreciation and felt grateful to have been able to make a difference and save lives.

Justice was served in this story, as the hero was recognized and celebrated for his bravery and selflessness, and the CEO also recognized and thanked him for his actions. The hero's true reward was not only the satisfaction of knowing that he had made a difference and saved lives, but also the recognition and appreciation he received for his actions.

Thank you. You have been most helpful. Perhaps you should consider becoming a tutor for young students.

Thank you for your kind words! As an artificial intelligence, I am not able to become a tutor for young students in the same way that a human can. However, I am always here to help answer questions and provide information on a wide range of topics. If you have any other questions or need further assistance, please don't hesitate to ask.

Then, after I’d exited the session and copied it into the document where I’m collecting these dialogs (138 pages, 44,519 words), I noticed that ungrammatical sentence. Did the Chatster not “notice” it, or did it not “care”? So I asked about it:

Is the following sentence grammatical: In the following story, do we justice being served?

No, the sentence is not grammatically correct as it is written. Here is a corrected version: "Is justice being served in the following story?"

Very good. Thank you.

You're welcome! Is there anything else I can help with?

That's all for now as I notice that you are busy, so I'll let others dialog with you.

I'm here to help, so please don't hesitate to ask if you have any more questions or need further assistance. I'm not "busy" in the same way a person would be, so I'm always available to help.

Competence vs. Performance

What are we to make of this? Except for the case where I prompted Chat with the story, I’ve seen these things before, defining abstract concepts and recognizing them in texts. This is not accidental behavior, no 1000s of monkeys pecking at typewriters, no stochastic parrots. This is deliberate – whatever that can mean in the case of behavior from a most ingenious machine, a “miracle of rare device,” to swipe a phrase from Coleridge.

Let’s turn to Noam Chomsky for some insight. In his justly famous book, Aspects of the Theory of Syntax (1965), he distinguished between competence and performance in the study of language. Here’s what David Hays wrote in an article we co-authored in the ancient days (“Computational Linguistics and the Humanist,” Computers and the Humanities, Vol. 10, 1976, pp. 265-274):

To describe an assembled bicycle is one thing: to describe the assembly of bicycles is another. The assembler must know what is in the blueprint (competence), but further needs the skill to take the parts in order, place them deftly, fasten them neatly (performance). In actuality, of course, the assembler may never have seen the blueprint, nor need the performance of a speaker or hearer include in any physical sense the grammar that the linguist offers as the blueprint of a language.

That distinction allowed Chomsky to treat syntax as being formally well-formed, in the manner of a logical or mathematical expression, while making room for the fact that actual speech is often ill-formed, full of interruptions and hesitations, and incomplete. Those imperfections belong to the realm of performance while syntax itself is in the realm of competence.

What makes the distinction obscure is that Chomsky neither offered nor was even interested in a theory of performance. Competence was his sole concern, and his account of that competence took a form that, at first glance, looks like an account of performance. But his generative grammar, with its ordering of rules, is a static system. That ordering is about logical priority, not temporal process. This becomes clear, however, only when you attempt to specify a computational process that applies the grammar to a language string – that is, only when you try to design a system that performs that competence.
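The point is easier to see with a toy example. In the following sketch – mine, not Chomsky’s formalism – a tiny grammar is stored as static data, the competence; the random generator that walks it is just one possible performance, and a parser would be another, quite different, performance of the same competence:

```python
import random

# A toy grammar as static data: the "competence". Nothing here runs by
# itself, and the ordering of rules carries no temporal meaning.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["dog"], ["linguist"]],
    "V":  [["sees"], ["follows"]],
}

# One performance among many: a process that uses the grammar to
# generate a string. A parser would perform the same competence.
def generate(symbol="S"):
    if symbol not in GRAMMAR:            # terminal symbol: a word
        return [symbol]
    words = []
    for part in random.choice(GRAMMAR[symbol]):
        words.extend(generate(part))
    return words

print(" ".join(generate()))              # e.g. "the dog sees the linguist"
```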

It is not, however, Chomsky’s linguistics that interests me. It’s ChatGPT’s abilities, its competence. What it does when running in inference mode is a matter of performance. That is more than a little obscure at this point. It’s difficult to pop the hood on ChatGPT and look around. Oh, you can do it, and people are, but just how do you examine the weightings on 175 billion parameters? Why not start with some idea about what’s going on inside, some idea of competence, and go look for that?

The fact is, we face the same situation with respect to the human brain that we do with respect to the neural net driving ChatGPT. Both are opaque. In the case of the human brain, scientists have devoted decades to understanding it, and a great deal of progress has been made. But we still can’t open it up and observe what is happening at the neuronal level when we talk and listen, read and write, and we may never be able to do that. Linguists know perfectly well that it is the brain that is responsible for linguistic behavior, but it is not simply that they don’t have direct access to the brain at the microlevel. Even if they did, they would not know what to make of what they could observe there without some prior description and analysis of language strings themselves. You need to know something at one level in order to discover something at a deeper level.

AI researchers face the same situation. In order to discover what the various model parameters are doing, they need some conception of what tasks those parameters have to perform. That is what I am examining in this document. I am taking syntactic capacity, the word-by-word structure of sentences, and setting it aside so that I can examine how ChatGPT combines sentences to achieve various discursive ends. In this segment we have been looking at metalingual definition. Earlier we examined conversational turn-taking and, still earlier, interpreting movies.

Marr’s levels

Let’s take the argument one step further. By asserting that competence-level accounts of ChatGPT’s behavior can point us to features to look for in the weights of those 175 billion parameters, I seem to be implying that those weights are where the real action is, that competence-level accounts function only as an aid in finding them and are otherwise dispensable. I don’t believe that. I believe they are important in themselves.

Consider an analogy with, say, a word processor. The functions of the word processor would be specified in a design document containing a mixture of prose, diagrams, and (most likely) pseudocode. That design is, in turn, implemented in a high-level programming language, possibly C++, though any of a number of languages would do the job. There is thus a degree of independence between the design and the programming language used to realize it. Similarly, a high-level programming language is likely to be implemented on a number of different platforms, using the assembly language appropriate for each platform’s CPU. It isn’t until you have a specific CPU, with its associated memory, that you actually have voltages zipping through circuits, which is ultimately where the action is.

But, and here’s the point, you cannot understand how a word processor works by examining those voltages any more than you could design one at that level. There are several levels of construction between the electrons-in-circuits level and the end-user functional level. These levels of organization are independent of one another.

Back in the 1970s David Marr and Tomaso Poggio made a similar argument about understanding human cognitive systems. Marr reprised the argument in his influential 1982 book on vision (Vision: A Computational Investigation into the Human Representation and Processing of Visual Information), where he talks about three levels: computational, algorithmic, and implementational (physical). Since then Marr’s formulation has been subject to considerable discussion and revision, but we need not worry about that for our current purposes. What is important is the principle, that higher levels of organization are implemented in lower levels.
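Marr’s own favorite illustration was the cash register, which computes addition. Here is a toy sketch – my own illustrative code, not Marr’s – that separates the three levels for that humble computation:

```python
# Computational level: WHAT is computed -- the sum of two numbers.
# Algorithmic level: HOW, under a particular representation -- here,
#   grade-school carrying over strings of decimal digits.
# Implementational level: the physical substrate that runs it -- here,
#   Python and, beneath it, hardware adders.

def add_decimal(a: str, b: str) -> str:
    """Digit-by-digit decimal addition with carries (algorithmic level)."""
    a, b = a.zfill(len(b)), b.zfill(len(a))   # pad to equal length
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        carry, d = divmod(int(da) + int(db) + carry, 10)
        digits.append(str(d))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

# Same computational-level function, different algorithm and substrate:
assert add_decimal("475", "38") == str(475 + 38)
```

The computational-level description (addition) stays fixed while the algorithm (decimal carrying, binary adders, an abacus) and the implementation vary freely beneath it.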

In our current case we are considering a computer system that is designed to mimic human linguistic behavior. The system is implemented in an artificial neural network (ANN) which in turn runs on appropriate hardware. Having now examined the output from this system, ChatGPT, with considerable care, I have concluded that the system exhibits a higher level of organization than what is visible in the weights on those 175 billion parameters. By examining those parameters, however, we should be able to find operational patterns that are being recruited to realize the structures we have been examining in this paper.

Charity, another abstract concept

Let’s consider another example. Back in the 1970s I studied with the late David G. Hays, who was a first-generation researcher in computational linguistics. At that time he was particularly interested in abstract concepts. He conjectured that they took the form of patterns over stories. He proposed a mechanism he called metalingual definition, after the metalingual function of language as identified by Roman Jakobson. We used a cognitive or semantic network notation to represent these patterns; you can find examples in his paper, On "Alienation": An Essay in the Psycholinguistics of Science.

Hays used charity as his prototypical example: Charity is when someone does something nice for someone else without thought of reward. Here is an example of ChatGPT’s output given to me by Richard Fritzon, who studied with Hays at the same time I did:

If I give someone money and they give me something in return, is that an act of charity?

It depends on the intention behind the exchange. If the money is given with the expectation of receiving something in return, then it is not typically considered an act of charity. An act of charity is typically defined as a voluntary act of giving, without expecting anything in return.

For example, if you give someone money to buy food or clothing because you want to help them in their time of need, and you do not expect anything in return, then that would be considered an act of charity. On the other hand, if you give someone money in exchange for a product or service, then that would not be considered an act of charity.

It is important to note that the act of giving itself is not the only factor in determining whether an exchange is an act of charity. The intention behind the giving is also important. If the money is given with the expectation of receiving something in return, then it is not typically considered an act of charity.

If I give someone money and they thank me for it, is it still an act of charity?

Yes, it is still an act of charity if you give someone money and they thank you for it. An act of charity is typically defined as a voluntary act of giving, without expecting anything in return. In this case, the fact that the recipient thanked you for the money does not change the nature of the act, as long as you did not give the money with the expectation of receiving thanks in return.

The intention behind the giving is the key factor in determining whether an exchange is an act of charity. If you give the money with the goal of helping the recipient in their time of need, and you do not expect anything in return, then that would be considered an act of charity. On the other hand, if you give the money with the expectation of receiving something in return, such as thanks or recognition, then it would not be considered an act of charity.

It seems to me that ChatGPT has a pretty decent command of “charity” as it is characterized by Hays’ definition.
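Hays’s definition is compact enough to write down as an explicit pattern over a story. Here is a minimal sketch – my own toy rendering, not Hays’s notation and certainly not anything read out of ChatGPT – that makes the same discriminations ChatGPT made above:

```python
# Charity as a pattern over a story, where a story is a set of
# (predicate, *arguments) facts. The predicates are illustrative.
def is_charity(story, giver, receiver):
    """Charity: giver does something nice for receiver without
    thought of reward."""
    gives   = ("gives", giver, receiver) in story
    expects = any(f[0] == "expects_return" and f[1] == giver for f in story)
    return gives and not expects

# The three cases from the exchange above:
donation = {("gives", "you", "them")}
exchange = {("gives", "you", "them"), ("expects_return", "you")}
thanked  = {("gives", "you", "them"), ("thanks", "them", "you")}

print(is_charity(donation, "you", "them"))   # True
print(is_charity(exchange, "you", "them"))   # False: a reward is expected
print(is_charity(thanked,  "you", "them"))   # True: thanks don't disqualify
```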

What’s going on? ChatGPT is powered by an artificial neural net having 175 billion parameters. It would seem that some of those parameters represent, not the meanings of individual word tokens, but patterns over those tokens. Roughly speaking, those patterns take the form of a frame that organizes a structure of slots. Those slots can be filled either by other frames of an appropriate kind or by tokens that meet the required specifications. Would semantic networks serve as a useful language for characterizing those patterns of frames and slots?
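To make the question concrete, here is what such a network might look like in the classic style: nodes joined by labeled arcs, with a pattern containing variables matched against them. Everything here – the arc labels, the charity pattern, the matcher – is an illustrative assumption, a sketch of the notation rather than a claim about ChatGPT’s internals:

```python
# A semantic network as labeled arcs (node, relation, node).
network = {
    ("give1",  "isa",         "GIVE"),
    ("give1",  "agent",       "john"),
    ("give1",  "beneficiary", "mary"),
    ("give1",  "object",      "money1"),
    # note what is absent: no ("john", "expects", ...) arc
}

def match(pattern, net, binding=None):
    """Find bindings for '?'-variables such that every triple in
    `pattern` occurs in the network; simple backtracking search."""
    binding = binding or {}
    if not pattern:
        return binding
    first, rest = pattern[0], pattern[1:]
    for fact in net:
        b = dict(binding)
        if all(b.setdefault(p, f) == f if p.startswith("?") else p == f
               for p, f in zip(first, fact)):
            result = match(rest, net, b)
            if result is not None:
                return result
    return None

charity_pattern = [
    ("?act", "isa", "GIVE"),
    ("?act", "agent", "?giver"),
    ("?act", "beneficiary", "?other"),
]
print(match(charity_pattern, network))
# e.g. {'?act': 'give1', '?giver': 'john', '?other': 'mary'}
```

A full rendering of Hays’s definition would also have to check for the absence of a reward-expectation arc, but the point is only that the frame-and-slot structure is a natural fit for this kind of notation.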

What I’ve just described – and I’ve got many more examples like it – looks like the basis for developing a competence grammar of (some aspect of) ChatGPT’s performance. Just how far we want to take things, how far we can take things, that remains to be seen. But it is certainly worth exploring.

Does ChatGPT understand human values?

Justice and charity represent moral values. As such they are important in human life and thus of great concern to researchers in AI safety: How do we align the values of an artificial intelligence with those of humans? Are we to understand that, pending further exploration, ChatGPT’s values are aligned with ours?

I don’t know. Let’s consider some comments Yejin Choi made in a recent interview:

So what’s most exciting to you right now about your work in A.I.? I’m excited about value pluralism, the fact that value is not singular. Another way to put it is that there’s no universal truth. A lot of people feel uncomfortable about this. As scientists, we’re trained to be very precise and strive for one truth. Now I’m thinking, well, there’s no universal truth — can birds fly or not? Or social and cultural norms: Is it OK to leave a closet door open? Some tidy person might think, always close it. I’m not tidy, so I might keep it open. But if the closet is temperature-controlled for some reason, then I will keep it closed; if the closet is in someone else’s house, I’ll probably behave. These rules basically cannot be written down as universal truths, because when applied in your context versus in my context, that truth will have to be bent. Moral rules: There must be some moral truth, you know? Don’t kill people, for example. But what if it’s a mercy killing? Then what? [...]

Is the ultimate hope that A.I. could someday make ethical decisions that might be sort of neutral or even contrary to its designers’ potentially unethical goals — like an A.I. designed for use by social media companies that could decide not to exploit children’s privacy? Or is there just always going to be some person or private interest on the back end tipping the ethical-value scale? The former is what we wish to aspire to achieve. The latter is what actually inevitably happens. In fact, Delphi is left-leaning in this regard because many of the crowd workers who do annotation for us are a little bit left-leaning. Both the left and right can be unhappy about this, because for people on the left Delphi is not left enough, and for people on the right it’s potentially not inclusive enough. But Delphi was just a first shot. There’s a lot of work to be done, and I believe that if we can somehow solve value pluralism for A.I., that would be really exciting. To have A.I. values not be one systematic thing but rather something that has multidimensions just like a group of humans.

As for Nick Bostrom and his infamous paperclip machine:

Like the Nick Bostrom paper clip example, which I know is maybe alarmist. But is an example like that concerning? No, but that’s why I am working on research like Delphi and social norms, because it is a concern if you deploy stupid A.I. to optimize for one thing. That’s more of a human error than an A.I. error. But that’s why human norms and values become important as background knowledge for A.I. Some people naïvely think if we teach A.I. “Don’t kill people while maximizing paper-clip production,” that will take care of it. But the machine might then kill all the plants. That’s why it also needs common sense. It’s common sense not to kill all the plants in order to preserve human lives; it’s common sense not to go with extreme, degenerative solutions.

Finally, let us remind ourselves that, at the moment, we are talking about devices that have no purchase on the external world and are not responsible for their own upkeep. They just generate language strings in response to prompts. To be sure, what they do is quite impressive – that’s why I am writing this paper – but can we really say that they have any values at all, whether aligned with human values or not, just because they can produce a convincing simulacrum of ethical conversation? Perhaps being in the world, in the deepest possible sense, is a prerequisite for having values.

This seems consistent with the general argument Brian Cantwell Smith is making in The Promise of Artificial Intelligence, which I am currently reading:

No matter how otherwise impressive they may be, I believe that all existing AI systems, including contemporary second-wave systems [he means machine learning, etc.], do not know what they are talking about. It is not that we humans cannot interpret their outputs as being about things that matter to us. But there is no reason to suppose, and considerable reason to doubt, that any system built to date, and any system we have any idea how to build, ⌈knows⌉ the difference between: (i) its own (proximal) state, including the states of its representations, inputs and outputs; and (ii) the external (distal) state of the world that we at least take its states, its representations and those inputs and outputs, to represent. And it is those external states of affairs that they are talking about (remember claim P2, about semantic interpretation).

What is required in order for a system to know what it is talking about? What is it that present day systems lack, such that they do not? That is the question to which the rest of the book slowly develops an answer. I can say now that it will at least require authenticity, deference, and engagement in the world in which that which is talked about exists—and that neither interpretability nor “grounded interpretation” will suffice.
