It is my impression that, unless someone has had experience with distributed accounts of word meaning, they’re likely to think of word meaning as an enclosed “atom” of meaning, distinct from other such atoms, much like word forms themselves. The meaning of a proposition or a sentence is then just a string of such atoms of meaning, as a freight train is a string of cars. I like to oppose this with a different metaphor: dropping pebbles into a pond, one after the other. Each pebble sends ripples across the surface of the pond, and the ripples from each pebble interfere with those from the others. That growing interference pattern is the meaning of the string.
And that’s how we need to think about meaning in LLMs, sorta’. Each word is represented by a token and a vector encoding its meaning as an embedding in a high-dimensional space – roughly 12K dimensions, I believe, for GPT-3. Given two words, we can compare their vectors, dimension by dimension. Where the words are closely related, they should have similar, perhaps even identical, values along some dimensions. Where the words are highly dissimilar, they will share few or no values.
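Here’s a toy sketch of that comparison in Python. The words and vectors are made up – these are not real GPT-3 embeddings, which run to thousands of dimensions – but the arithmetic is the same: an overall similarity score plus a dimension-by-dimension count of shared values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up stand-ins for word embeddings. Real GPT-3 vectors have
# thousands of dimensions; these have eight, just to show the idea.
dog = rng.normal(size=8)
puppy = dog + rng.normal(scale=0.2, size=8)   # closely related: almost the same vector
carburetor = rng.normal(size=8)               # unrelated: drawn independently

def cosine(a, b):
    """Overall similarity, from -1 (opposed) to 1 (identical)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def shared_dimensions(a, b, tol=0.25):
    """Count the dimensions where the two vectors take (nearly) the same value."""
    return int(np.sum(np.abs(a - b) < tol))

print("dog vs. puppy:     ", cosine(dog, puppy), shared_dimensions(dog, puppy))
print("dog vs. carburetor:", cosine(dog, carburetor), shared_dimensions(dog, carburetor))
```

The related pair should come out with a similarity close to 1 and share most of its dimensions; the unrelated pair should come out much lower and share few.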
I find the idea of entanglement useful here. Some words have meanings that are closely entangled, while others do not. We can think of an embedding model as an entanglement matrix, in which the meaning of any one word is a function of its position in the matrix. When you present a prompt to, say, ChatGPT, it generates an output by calculating the entanglement of the prompt with the language model.
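To make that a bit more concrete, here’s a cartoon version with made-up words and numbers. A real model runs the prompt through many transformer layers, not a single matrix lookup, so this is only a gesture at the idea: each word’s vector is a row of the matrix, and the prompt gets scored against every row at once.

```python
import numpy as np

rng = np.random.default_rng(1)

# A made-up vocabulary and embedding matrix: one row per word.
# Which row a word occupies - its position in the matrix - fixes its
# relations to every other word.
vocab = ["the", "cat", "dog", "sat", "ran", "mat"]
E = rng.normal(size=(len(vocab), 8))          # toy "entanglement matrix"

def embed(word):
    """Look up a word's vector by its row in the matrix."""
    return E[vocab.index(word)]

# A crude stand-in for "calculating the entanglement" of a prompt with
# the model: pool the prompt's vectors, then score every word against it.
prompt = ["the", "cat", "sat"]
pooled = np.mean([embed(w) for w in prompt], axis=0)
scores = E @ pooled
for word, score in sorted(zip(vocab, scores), key=lambda pair: -pair[1]):
    print(f"{word:>4}  {score:+.2f}")
```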
Contrast this way of thinking with the standard one, “Generate the next token, and the one after that, and so on.” The standard way has you thinking in terms of atomic units, tokens, and it hides the nature of the process, making it seem deeply obscure, even magical. Just what’s going on when the underlying model is “calculating the entanglement” of the prompt with the model is not at all obvious – I can’t tell you what it is – but it has a different feel. Similarly, training by “predict the next word” is really a way of calculating the entanglement of the text with the whole model, for the whole text (in the context window) is involved in the calculation, not just the word that immediately precedes the one being predicted.
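Here’s a minimal sketch of what that training signal looks like, with random numbers standing in for a real model’s output. The point it illustrates is just the last one: the loss is computed at every position in the window at once, so the whole text participates, not only the final word.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pretend these are the token ids of a text sitting in the context window,
# and that the model has produced logits over a 50-word vocabulary at
# each position. Both are made up here.
tokens = np.array([3, 17, 42, 8, 25, 11])
logits = rng.normal(size=(len(tokens), 50))

def next_word_loss(logits, tokens):
    """Cross-entropy for 'predict the next word' at every position at once.

    Position t is scored on how well it predicts token t+1, so the loss
    averages over the whole window, not just the last word.
    """
    preds = logits[:-1]                           # predictions at positions 0 .. n-2
    targets = tokens[1:]                          # the word that actually comes next
    log_probs = preds - np.log(np.exp(preds).sum(axis=-1, keepdims=True))
    return float(-log_probs[np.arange(len(targets)), targets].mean())

print("average loss over the window:", next_word_loss(logits, tokens))
```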
More later.