Wednesday, September 20, 2023

Notes on ChatGPT’s “memory” for strings and for events

[Updated on Sept. 21 and 26, 2023]

Here I take a look at the results reported in three previous posts and begin the job of making sense of them analytically. Here are the posts:

What must be the case that ChatGPT would have memorized “To be or not to be”? – Three kinds of conceptual objects for LLMs, New Savanna, September 3, 2023.

To be or not: Snippets from a soliloquy, New Savanna, September 12, 2023.

Entry points into the memory stream: Lincoln’s Gettysburg Address, New Savanna, September 13, 2023.

I set the stage with a passage from F. C. Bartlett’s 1932 classic, Remembering. Then I consider the three cases I laid out in the first post and go on to look at the results reported in the next two. I conclude by suggesting that we look to the psychological literature on memory and recall to begin making analytic sense of these results. Of course, we also need more observations.

F.C. Bartlett, memory, and schemas

Back in the ancient days of 1932, F. C. Bartlett published a classic study of human recall, Remembering: A Study in Experimental and Social Psychology. He performed a variety of experiments, a number of them involving the familiar game of passing a story from person to person along a chain and then comparing the initial story with the final one. He drew the general conclusion that memory is not passive, like a tape recorder or a camera, but active, involving schemas (I believe he may have been the one to introduce that term to psychology), which shape our recall. A story that corresponds to an existing schema will be more faithfully transmitted than one that does not.

However, I’m not interested in those experiments. I’m interested in something he reports in a later chapter, “Social Psychology and the Manner of Recall,” pp. 264-266:

As everybody knows, the examination by Europeans of a native witness in a court of law, among a relatively primitive people, is often a matter of much difficulty. The commonest alleged reason is that the essential differences between the sophisticated and the unsophisticated modes of recall set a great strain on the patience of any European official. It is interesting to consider an actual record, very much abbreviated, of a Swazi trial at law. A native was being examined for the attempted murder of a woman, and the woman herself was called as a necessary witness. The case proceeded in this way:

The Magistrate: Now tell me how you got that knock on the head.

The Woman: Well, I got up that morning at daybreak and I did... (here followed a long list of things done, and of people met, and things said). Then we went to so and so’s kraal and we... (further lists here) and had some beer, and so and so said....

The Magistrate: Never mind about that. I don’t want to know anything except how you got the knock on the head.

The Woman: All right, all right. I am coming to that. I have not got there yet. And so I said to so and so... (there followed again a great deal of conversational and other detail). And then after that we went on to so and so’s kraal.

The Magistrate: You look here; if we go on like this we shall take all day. What about that knock on the head?

The Woman: Yes; all right, all right. But I have not got there yet. So we... (on and on for a very long time relating all the initial details of the day). And then we went on to so and so’s kraal ... and there was a dispute ... and he knocked me on the head, and I died, and that is all I know.

Practically all white administrators in undeveloped regions agree that this sort of procedure is typical of the native witness in regard to many questions of daily behaviour. Forcibly to interrupt a chain of apparently irrelevant detail is fatal. Either it pushes the witness into a state of sulky silence, or disconcerts him to the extent that he can hardly tell his story at all. Indeed, not the African native alone, but a member of any slightly educated community is likely to tell in this way a story which he has to try to recall.

What’s going on here? Keep in mind that the issue is not word-for-word recall. Rather, it is the incidents being recalled, in whatever verbal form is convenient. Why can’t the witness simply begin talking about the incident in question? And why, when asked to get on with it, must the witness return to the beginning of the day?

It's as though the memory stream of a day’s events can only be entered at the beginning of the day, and not at arbitrary points within the day. I note that we are dealing with people who do not have clocks and watches they can use to mark events during the day. Of course, it’s not enough to have a watch; you must also take note of it at various times during the day. That will give you various points of entry into the memory stream.

This sort of thing is also quite familiar to me as a musician. While I have learned to read music, and have done so often, I am an improvising (jazz) musician and am quite used to playing things “by ear.” If I am practicing a melody by ear, and get lost at some point, I may not be able to restart at the point where I broke off. Rather, like the witness testifying in court, I have to go back to the beginning – in this case, the beginning of the melody rather than the beginning of the day.

To be or not, and beyond

Early in September I asked the question: “Given that [the LLM underlying ChatGPT] has been trained to predict [only] the next word, what MUST have been the case in order for ChatGPT to return the whole soliloquy when given the opening six words?” It must have encountered that soliloquy many different times in its training corpus. That’s the only way that predicting that exact sequence, word after word, would not result in training loss.
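
To make that point about loss concrete, here is a toy sketch of my own devising, a simple counting model that is nothing like GPT’s actual architecture or training procedure. The passage, the competing text, and the three-word context window are illustrative assumptions; the point is only that a passage which recurs verbatim comes to dominate the statistics for its own contexts, so predicting it exactly is what minimizes loss.

```python
from collections import Counter, defaultdict

# Toy corpus: the passage recurs verbatim many times; another text shares
# some of its three-word contexts but continues differently, fewer times.
passage = "to be or not to be that is the question".split()
competitor = "or not to worry about it".split()
docs = [passage] * 50 + [competitor] * 3

k = 3  # predict the next word from the previous three words
counts = defaultdict(Counter)
for doc in docs:
    for i in range(len(doc) - k):
        counts[tuple(doc[i:i + k])][doc[i + k]] += 1

# Greedy generation from the passage's opening words: because the passage
# dominates the statistics for its own contexts, its exact continuation
# wins at every step.
seq = list(passage[:k])
for _ in range(len(passage) - k):
    ctx = tuple(seq[-k:])
    if ctx not in counts:
        break
    seq.append(counts[ctx].most_common(1)[0][0])

print(" ".join(seq))  # "to be or not to be that is the question"
```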

However, given Bartlett’s observations about human memory, ChatGPT’s ability to rattle off a whole sequence word for word does raise a question. Is it just passively stringing one word after another, or does it recognize internal structure? How can we figure out which is the case?

I want to set those questions aside for a moment, but I will return to them. Though interesting, such specific sequences are relatively rare. It is much more common for the training corpus to contain many texts about the same event or set of events, but not expressed in exactly the same words. Thus I gave ChatGPT the prompt, “Johnstown flood, 1889.” Note that I specified the year because Johnstown (PA) was subsequently flooded in 1936 and 1977. But it’s the 1889 flood that made the national news, prompting national concern.

ChatGPT responded in a way I thought reasonable. Since I had grown up in Johnstown and was familiar with the flood, I didn’t bother to check the Chatster’s reply against reliable sources. But, for all I knew, the Chatster was giving me some specific text word-for-word, which would imply that a specific text about the flood had appeared many times in the training corpus. While that didn’t seem likely, I had to check. Later that same day I opened a new session and gave ChatGPT the same prompt. Again, it gave me a reasonable reply, but one expressed differently from the earlier one. This reply gave the sequence of events in seven numbered paragraphs; the earlier reply did not.

So we’ve got two cases so far: 1) a specific sequence of words that is repeated when prompted for, and 2) flexible recall of an event using different word sequences in different sessions. There is a third case to consider: 3) an event that is in the training corpus, but appears so few times, perhaps only once, that it doesn’t register in ChatGPT’s model as a specific event. The text serves as evidence about word usage, but otherwise has no effect on the model.

Without access to the training corpus, how do you identify such things? You can’t. But you can make a plausible guess. I’d attended a Dizzy Gillespie concert back in the mid-1980s, which I’d written about in two places that could have been in the training corpus. I prompted ChatGPT about that concert, naming the venue and the city where the concert took place in addition to the artist (Diz). Apparently, it had no record of it.

Let me offer a somewhat different example of this last case. I’m currently interested in a mathematician named Miriam Lipschutz Yevick. She published a paper back in 1975 (Holographic or Fourier logic) which I think is interesting and important, but which has been forgotten. The paper is available on the web, and I have blogged about it. A few other papers are also available, as well as an obituary, all from before ChatGPT’s cut-off point. I’ve asked the Chatster about Yevick in several different sessions, but it knows nothing about her.

Let’s think about this a bit. GPT-3.5, the large language model underlying ChatGPT, may have been trained on a big chunk of the entire internet, but its model does not incorporate everything it has been trained on. It abstracts over those texts; it does not memorize them in any ordinary sense of the word. When a particular text occurs word-for-word many times and in various contexts, GPT-3.5 will learn it word-for-word; think of that as, in effect, an abstraction over those many contexts. When a particular topic, that is, a particular congeries of terms, occurs many times and in various contexts, GPT-3.5 abstracts over that congeries and meshes the terms together so they are mutually available. If neither of these things happens for something that appears in a text, then that something just dissolves into the net.

Thus we’ve got three cases: 1) word-for-word recall of a text, 2) flexible recall of a specific topic, and 3) no recall of a topic that was in its training corpus. I want to return to the first case, where ChatGPT is generating a fixed text, and see what, if anything, we can learn about how it does it.

The inner structure of a text

While I did undertake further investigation of Hamlet’s soliloquy (To be or not: Snippets from a soliloquy), I want to look at the work I did with Lincoln’s Gettysburg Address (Entry points into the memory stream: Lincoln’s Gettysburg Address), which is a bit cleaner and more systematic. I gave ChatGPT three sets of four prompts:

  • Sentence-initial prompts: four from the beginning of a sentence,
  • Syntactically consonant prompts: four from the interior of a sentence that respect syntactic boundaries, and
  • Syntactically unruly prompts: four that run across syntactic boundaries.

ChatGPT correctly identified the “Gettysburg Address” as the source of all of the prompts in the first category. But it mislocated two of them, asserting that they were from the opening line, when they were not.

Its response to the second category was peculiar. It got one correct and missed one. In the other two cases, it identified the “Gettysburg Address” as the source, but the context it then quoted didn’t include the prompt. In both of those cases the quoted segment was from the end of the text.

So, there are times when it correctly identifies the text where the prompt comes from, but it doesn’t correctly locate the prompt in that text. Identifying and locating seem to be separate operations. Interesting.

What happened in the last case, where none of the prompts lined up with syntactic boundaries? ChatGPT was unable to link any of them with the “Gettysburg Address.” However, in the last three cases I thought to offer a further prompt, that the passage was from “a very well-known speech.” That vague bit of information was enough to send it to the “Gettysburg Address.” In the case of the second prompt, ChatGPT provided context that did not contain the prompt. In the case of the third prompt, the quoted context did contain the prompt, but that quoted text was half the speech. In the last case (“in vain—that this”), ChatGPT offered a context that contained the last part of the prompt (“that this”) but not the first. Again, locating the text is one kind of task; locating the prompt within the text is another.

It is possible for one (relatively short) text to evoke another (somewhat longer) text. The mechanism making the connection need not be able to look over and examine the two texts in order to do this; they just need to ‘contact’ one another. But locating the first text within the second requires the mechanism to ‘step back’ from the two texts and explicitly compare them, and that is a more complex operation.
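
Here is a toy illustration of that distinction, entirely my own and not a claim about ChatGPT’s mechanism: a crude overlap score can link a prompt to a text without aligning the two, while placing the prompt within the text requires an explicit comparison that can fail even when the link succeeds.

```python
# Toy illustration: identifying a source text and locating a prompt within
# it are different operations with different failure modes.
speech = ("four score and seven years ago our fathers brought forth on this "
          "continent a new nation conceived in liberty")

def identify(prompt):
    # "contact": crude word-overlap score, no alignment of the two texts
    words = set(prompt.split())
    return len(words & set(speech.split())) / len(words)

def locate(prompt):
    # "stepping back": explicit alignment of the prompt against the speech
    return speech.find(prompt)

for prompt in ["fathers brought forth on",   # respects word order
               "forth on this our fathers"]: # scrambled, "unruly"
    print(prompt, "->", f"identify {identify(prompt):.2f},",
          "locate", locate(prompt))
# Both prompts "contact" the speech (overlap 1.0), but only the first can
# be located within it; the scrambled one is identified yet cannot be placed.
```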

What’s going on? Associative memory

I don’t really know, but I’m willing to speculate just a bit. Recall the situation of the witness in a colonial court who can only recall some event during the day by starting the act of recollection at the beginning of the day. I likened that to being able to play a melody “by ear” only if one starts at the beginning. Those cases seem like being able to recall Hamlet’s full soliloquy when given only the first six words, or being able to recall the Gettysburg Address when given its first six words. If we were dealing with a human, I’d say we’ve got associative recall, where the whole of some mental object is recalled when prompted with a part.

ChatGPT isn’t a human being, but it was trained on texts produced by humans. My default way to proceed is to treat its output as though it came from a human – with some reservations, of course: it’s neither sentient nor conscious. It’s not that I actually believe or hope that its inner mechanisms are something like ours, but simply that I have to start thinking somewhere. Starting with human models – which I will get to in a second – is a reasonable thing to do.

For the purposes of discussion, I am going to treat prompts that begin a particular word-for-word text as a special case of prompts that are consistent with syntactic boundaries. That takes care of those cases, in both my work on Hamlet’s soliloquy and Lincoln’s address, where the prompt comes from within the text and is consonant with syntactic boundaries. Yes, I know, the beginning of the text and the inside of the text are different cases, but right now I’m just trying to get through this. It is clear that ChatGPT attends to linguistic boundaries; otherwise it would never have become very good at word prediction, for such boundaries are ubiquitous in the texts in its training corpus.

What do we do about prompts like “Johnstown flood, 1889,” which don’t seem to be the beginning of any specific text (nor do they line up with a phrase in some well-known text)? What I’m going to do is treat them like those “syntactically unruly” prompts I used with the Gettysburg Address, such as “fought here have thus” and “in vain—that this.” Recall that, in the case of those unruly prompts, ChatGPT was able to identify the source text once it ‘knew’ the prompts were from a famous speech – perhaps there’s a ‘famous speech’ region in its activation space. I am going to treat these two kinds of cases as involving associative memory. In one case (Johnstown) a more or less arbitrary phrase is linked to a topic, while in the other a more or less arbitrary phrase is linked to a specific text.

Perhaps not so arbitrary, now that I think of it. Quite a lot of research has been done using optical holography as a model for associative memory in general and for verbal memory in particular. Holographic representation has two characteristics that make it pertinent here: 1) given a part of some item in memory, it can return the whole item, and 2) given a low-resolution version of an item, it can return the full item. The various phrases I’ve taken from these texts, Hamlet’s soliloquy and Lincoln’s Gettysburg Address, are parts of those whole texts. The prompt, “Johnstown flood, 1889,” is, in effect, a low-resolution representation of a fairly extensive and complex set of events. That prompt contains three crucial pieces of information which, taken together, identify that set of events and distinguish them from similar sets of events. “Johnstown” specifies a geographic location; “flood” indicates a kind of event; and “1889” identifies the year. I note that Johnstown had notable floods in 1936 and 1977 as well. Thus “Johnstown flood” alone would not necessarily have picked out the 1889 flood, but the addition of the year supplied the necessary specificity.
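
For concreteness, here is a minimal sketch of a holographic associative memory of the circular-convolution variety discussed in the literature cited below. It illustrates the part-retrieves-whole property and nothing more; the vector dimension, the tiny lexicon, and the function names are my own assumptions, and none of this is a claim about ChatGPT’s internals.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2048  # dimensionality of the holographic vectors

def vec():
    # random vector with elements ~ N(0, 1/d), the usual convention
    return rng.normal(0.0, 1.0 / np.sqrt(d), d)

def cconv(a, b):
    # circular convolution: binds two vectors into one of the same size
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    # circular correlation: approximately unbinds what cconv put together
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

# a tiny lexicon: one random vector per word
lex = {w: vec() for w in ["to", "be", "or", "not", "question"]}

# a single memory trace holding two associations, superposed
trace = cconv(lex["to"], lex["be"]) + cconv(lex["or"], lex["not"])

# probe the trace with a part ("to"); clean up against the lexicon
probe = ccorr(lex["to"], trace)
sims = {w: float(np.dot(probe, v) / (np.linalg.norm(probe) * np.linalg.norm(v)))
        for w, v in lex.items()}
print(max(sims, key=sims.get))  # "be": the part has retrieved its partner
```

The same arithmetic degrades gracefully, which speaks to the low-resolution case: a noisy or partial probe still lands closest to the right item when cleaned up against the lexicon.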

“OK, so we’re dealing with associative memory,” you say. “Big deal. That still doesn’t tell us much about what’s going on inside the Chatster.”

No, it doesn’t. But it tells us where to start looking, for there is a large and increasingly mathematically and computationally sophisticated literature on associative recall. At this point I don’t know much more about that literature than the fact that it exists, so I’m not going to propose a specific model. What I will do, however, is offer some suggestions about where to start looking.

Here are three places to start:

Karl H. Pribram, The Neurophysiology of Remembering, Scientific American, Vol. 220, No. 1 (January 1969), pp. 73-87, https://www.jstor.org/stable/24927611

While the idea of neural holography had been pioneered by others earlier in the 1960s, this is where I first learned about it and is perhaps the first article that brought the idea to a large, highly-educated audience. In those days Scientific American published articles that had real intellectual meat to them. On that account, this article is well worth your attention.

H. C. Longuet-Higgins, D. J. Willshaw and O. P. Buneman, Theories of associative recall, Quarterly Reviews of Biophysics, Volume 3, Issue 2, May 1970, pp. 223-244. DOI: https://doi.org/10.1017/S0033583500004583

Yes, I know, 1970 is practically the Cambrian Era, but Longuet-Higgins was an important theorist and this article references some important, if early, work.  He covers not only 2-D holography in the optical domain, but also 1-D holography in the temporal domain. The article discusses four other kinds of models. (Also, Longuet-Higgins coined the term “cognitive science.”)

Michael N. Jones and Douglas J. K. Mewhort, Representing Word Meaning and Order Information in a Composite Holographic Lexicon, Psychological Review, 2007, Vol. 114, No. 1, 1-37. DOI: https://doi.org/10.1037/0033-295X.114.1.1

2007, now we’ve made it into the Jurassic. But sections of this article read a bit like we’re discussing transformers, where we’re dealing with both word meaning and context of occurrence. Here’s the abstract:

The authors present a computational model that builds a holographic lexicon representing both word meaning and word order from unsupervised experience with natural language. The model uses simple convolution and superposition mechanisms (cf. B. B. Murdock, 1982) to learn distributed holographic representations for words. The structure of the resulting lexicon can account for empirical data from classic experiments studying semantic typicality, categorization, priming, and semantic constraint in sentence completions. Furthermore, order information can be retrieved from the holographic representations, allowing the model to account for limited word transitions without the need for built-in transition rules. The model demonstrates that a broad range of psychological data can be accounted for directly from the structure of lexical representations learned in this way, without the need for complexity to be built into either the processing mechanisms or the representations. The holographic representations are an appropriate knowledge representation to be used by higher order models of language comprehension, relieving the complexity required at the higher level.
