
Friday, April 14, 2023

LLMs and hallucination, like white on rice?

It’s a matter of history, I suppose.

We know that LLMs confabulate. They make stuff up. That’s what’s known as hallucination. Here’s the introductory section of the Wikipedia entry for Hallucination (artificial intelligence):

In artificial intelligence (AI), a hallucination or artificial hallucination (also occasionally called delusion) is a confident response by an AI that does not seem to be justified by its training data. For example, a hallucinating chatbot with no knowledge of Tesla's revenue might internally pick a random number (such as "$13.6 billion") that the chatbot deems plausible, and then go on to falsely and repeatedly insist that Tesla's revenue is $13.6 billion, with no sign of internal awareness that the figure was a product of its own imagination.

Such phenomena are termed "hallucinations", in analogy with the phenomenon of hallucination in human psychology. Note that while a human hallucination is a percept by a human that cannot sensibly be associated with the portion of the external world that the human is currently directly observing with sense organs, an AI hallucination is instead a confident response by an AI that cannot be grounded in any of its training data. AI hallucination gained prominence around 2022 alongside the rollout of certain large language models (LLMs) such as ChatGPT. Users complained that such bots often seemed to "sociopathically" and pointlessly embed plausible-sounding random falsehoods within its generated content. Another example of hallucination in artificial intelligence is when the AI or chatbot forget that they are one and claim to be human.

By 2023, analysts considered frequent hallucination to be a major problem in LLM technology.

Note the dating: “...gained prominence around 2022...” The term had been in use earlier.

I first encountered it in Language Log, a group blog about language. A post from April 15, 2017, What a tangled web they weave, may not be the earliest example there, but it’s probably close to it. It is about how the then-current version of Google Translate dealt with repetitions of character sequences from non-Latin orthographic systems. Here’s the first example from that post:

ュース repeated gives successively:

Juice
News
Auspicious
Hooooooooooooooooooooooooooo
Yooooooooooooooooooooooooooooo
Yu Susui Suu Suu Su
Yu Susui Suu Suu Suu Su
Susui Suu Suu Suu Suu Su
Susui Suu Suu Suu Suu Suu Su
Susui Suu Suu Suu Suu Suu Suu Su
Susuue with the airport
It is a good thing to see and do.
It is a good idea to have a good view of the surrounding area.
It is a good thing for you to do.
It is good to know the things you do not do.
It is good to know the things you do not mind.
It is a good idea to have a good view of the surrounding area.

This is a similar, but not quite the same, phenomenon. I have no idea whether or not there is a genealogical relationship between this earlier use of “hallucination” and the current use in the context of LLMs.

It does seem to me, however, that hallucination is the ‘natural’ – by which I mean something like default – state of LLMs. To be sure, the training corpus has all kinds of texts: true stories, fantasies, lies, valid explanatory material, and so forth. But the model has no way of making epistemological distinctions among them. It’s all text. And “true,” “false,” “lying,” “fantasy,” “imaginary,” “dream,” “verified,” and so forth are all just words. While the model can work out the relationships those words have among themselves and with other words, it has no way of linking any words or combinations of words to the external world. Since it has no direct contact with reality, it has no way of calibrating the relationship between texts and reality.
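
To make that concrete, here is a minimal sketch in Python (a toy bigram generator, not any particular LLM), built from nothing but co-occurrence statistics over a made-up corpus; every sentence in the corpus is invented for illustration. All the generator can do is reel off word sequences consistent with those statistics; nothing in it marks one sentence as grounded in the world and another as not.

import random
from collections import defaultdict

# A made-up toy corpus. Nothing here distinguishes the true sentences
# from the false ones; to the model they are all just strings.
corpus = [
    "the moon orbits the earth",
    "the moon is made of cheese",
    "water boils at one hundred degrees",
    "water boils at room temperature",
]

# Count bigram transitions: for each word, the words observed to follow it.
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        transitions[a].append(b)

def generate(start, max_len=8, seed=0):
    """Reel off a word sequence consistent with the corpus statistics."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_len and transitions[out[-1]]:
        out.append(rng.choice(transitions[out[-1]]))
    return " ".join(out)

print(generate("the", seed=1))
print(generate("water", seed=2))
# The generator is as happy to produce "the moon is made of cheese" as
# "the moon orbits the earth"; the statistics carry no mark of which
# sentence is calibrated against reality.

Real LLMs are, of course, vastly more sophisticated than this toy, but the training signal is the same in kind: text predicting text, never text checked against the world.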

What the LLM has learned is the system of language, but not how that system is related to the world. Without knowledge of that relationship, all it can do is reel off texts that are consistent with the system. Moreover, I strongly suspect – for I’ve not thought it through – that there is more to the relationship between the language system and the world than a simple collection of pointers.

It is, I suppose, a peculiar kind of Cartesian being. And it has no way of knowing whether the prompts we give it are the work of a malignant being or the work of angels.

I note, finally, that we devote considerable effort to keeping human language, and texts, aligned with the world. And these efforts tend to be bound to cultural formations, some of which are anchored to ethnic and national groups, many more of which are anchored to specific disciplinary practices. If “hallucination” were not so easy for humans, conspiracy theories and millennial cults would not flourish so easily, would they?

The more I think of it, the more it seems to me that eliminating these hallucinations will not be an easy task. It may, in fact, for all practical purposes, be the major task of so-called AI alignment.

See my earlier post, There’s truth, lies, and there’s ChatGPT [Realms of Being].
