Jon Evans, "Language Is Our Latent Space," Gradient Ascendant, March 14, 2023.
An analogy for how LLMs work:
Another analogy, as two combined can be more illuminating than one: consider snooker, the pool-like game won by sinking balls of varying value in the best possible order. Imagine a snooker table the size of Central Park, occupied by thousands of pockets and millions of numbered balls (the numbers 1 through 32,000, repeated). Now imagine that the rules of snooker — i.e. which balls are most profitable to sink — change after every shot, depending on where the cue ball is, which balls have previously been sunk, the phase of the moon, etc.
Call that “Jungle Snooker”, borrowing from Eric Jang's idea of Jungle Basketball. The numbers on the balls represent word embeddings; ‘which balls to aim to sink in which order,’ the patterns in latent space. All we have really taught modern LLMs is how to be extremely (stochastically) good at Jungle Snooker, which doesn’t feel that different, qualitatively, from teaching them how to be extremely good at Go or chess. Now, the results, when converted into words, are phenomenal, often eerie —
— but LLMs still don’t “know” that their numbers represent words. In fact, they never see words per se; we break language into tokens (word fragments, roughly analogous to phonemes), number those tokens, and feed those numbers in as inputs.
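To make the token idea concrete, here is a minimal sketch in Python of how text turns into the integer IDs a model actually sees. The fragment vocabulary and the greedy longest-match splitter are invented for this illustration; real tokenizers (byte-pair encoding and its relatives) learn their tens of thousands of fragments from a corpus.

```python
# Toy illustration: text -> subword fragments -> integer IDs.
# The vocabulary below is made up for this example; real LLM
# tokenizers learn ~32,000+ fragments from data.

TOY_VOCAB = {
    "un": 0, "believ": 1, "able": 2, "token": 3,
    "iz": 4, "ation": 5, "the": 6, " ": 7,
}

def toy_tokenize(text: str) -> list[int]:
    """Greedily split `text` into the longest known fragments and return their IDs."""
    ids = []
    i = 0
    while i < len(text):
        for length in range(len(text) - i, 0, -1):  # try the longest fragment first
            piece = text[i:i + length]
            if piece in TOY_VOCAB:
                ids.append(TOY_VOCAB[piece])
                i += length
                break
        else:
            i += 1  # skip characters the toy vocabulary can't cover
    return ids

print(toy_tokenize("unbelievable tokenization"))
# -> [0, 1, 2, 7, 3, 4, 5]   ("un", "believ", "able", " ", "token", "iz", "ation")
```

The model only ever sees that list of integers; the mapping back to words lives entirely on our side of the table.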
Language as the latent space of culture:
Our latent space, known as language, implicitly encodes an enormous amount of knowledge about the world: concepts, relationships, interactions, constraints. LLM embeddings in turn implicitly include a distilled version of that knowledge. A reason LLMs are so unreasonably effective is that language itself is a machine for understanding, one which, it turns out, includes undocumented and previously unused capabilities — a “capability overhang.”
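A loose way to see what "distilled knowledge" means here: embeddings place related words near each other in a vector space, and nearness can be measured with cosine similarity. The three-dimensional vectors below are invented for illustration (real embeddings have hundreds or thousands of learned dimensions), but the similarity computation is the standard one.

```python
import math

# Invented 3-dimensional "embeddings" for illustration only;
# real models learn high-dimensional vectors from text.
EMBEDDINGS = {
    "king":  [0.9, 0.7, 0.1],
    "queen": [0.8, 0.6, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Standard cosine similarity: how closely aligned two vectors are."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine_similarity(EMBEDDINGS["king"], EMBEDDINGS["queen"]))  # ~0.99: related concepts
print(cosine_similarity(EMBEDDINGS["king"], EMBEDDINGS["apple"]))  # ~0.30: unrelated concepts
```

The geometry, not any explicit rule, is what encodes "king and queen are related; king and apple are not," which is the sense in which the latent space distills knowledge from language.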
You might also look at Ted Underwood's paper, Mapping the latent spaces of culture.
Out of the cave:
Invert Plato's cave, and imagine yourself as a puppet master trying to reach out to chained prisoners with whom you can communicate only through shadows. Similarly, right now all we have are machines that we can teach to play Jungle Snooker.
But if we do ever build a machine capable of genuine understanding (setting aside the question of whether we want to, and noting that people in the field generally think it's "when," not "if"), it seems likely that language will be our most effective shacklebreaker, just as it was for us. This in turn means today's LLMs are likely to be the crucially important first step down that path.
The question is whether there is any iterative path from Jungle Snooker to Plato's Cave to emergence. Some people think we'll just scale there, and as machines get better at Jungle Snooker, they will naturally develop a facility for abstracting complexity into heuristics, which will breed agency and curiosity and a kind of awareness — or at least behavior indistinguishable from awareness — in the same way that embeddings and latent space spontaneously emerge when you train LLMs.
Others (including me) suspect that whole new fundamental architectures and/or training techniques will be required. But either way, it seems very likely that language will be key, and that modern LLMs, though they'll seem almost comically crude in even five years, are a historically important technology. Language is our latent space, and that's what gives it its unreasonable power.
Yes, we'll need a whole new architecture. There's more at the link.