Tuesday, May 21, 2024

The Platonic Representation Hypothesis

Minyoung Huh, Brian Cheung, Tongzhou Wang, Phillip Isola, The Platonic Representation Hypothesis, arXiv:2405.07987 [cs.LG]

Abstract: We argue that representations in AI models, particularly deep networks, are converging. First, we survey many examples of convergence in the literature: over time and across multiple domains, the ways by which different neural networks represent data are becoming more aligned. Next, we demonstrate convergence across data modalities: as vision models and language models get larger, they measure distance between datapoints in a more and more alike way. We hypothesize that this convergence is driving toward a shared statistical model of reality, akin to Plato's concept of an ideal reality. We term such a representation the platonic representation and discuss several possible selective pressures toward it. Finally, we discuss the implications of these trends, their limitations, and counterexamples to our analysis.

You can find a summary at The Gradient #75. From the summary:

What excites me the most about this representational convergence is the ability to share and use data from different modalities for training and inference. It also suggests that multimodal models are better than single-modal ones, given they are grounded in additional modalities and should represent the world in a way that's closer to what the world really is. On the other hand, it is not clear whether a 16% alignment (see one of the figures above) between a set of language and vision models is significant enough to qualify as "convergence." I'm also questioning whether this platonic representation, assuming it does exist, is the endpoint we should pursue, as opposed to what we want the world to be. But that's a whole other ethical debate.

– Jaymee
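A quick note on what a number like that 16% is measuring. The paper quantifies cross-model alignment with a nearest-neighbor style metric: roughly, how often two models agree about which datapoints are close together. The sketch below is a minimal mutual k-nearest-neighbor score in that spirit, not the paper's exact implementation; the function names, the cosine-similarity choice, and k=10 are my own illustrative defaults.

```python
import numpy as np

def knn_indices(feats, k):
    """Indices of the k nearest neighbors of each row under cosine similarity,
    excluding the point itself."""
    x = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sims = x @ x.T
    np.fill_diagonal(sims, -np.inf)           # never count a point as its own neighbor
    return np.argsort(-sims, axis=1)[:, :k]   # top-k most similar rows

def mutual_knn_alignment(feats_a, feats_b, k=10):
    """Average fraction of shared k-nearest-neighbors between two models'
    embeddings of the same n datapoints: feats_a is (n, d_a), feats_b is (n, d_b)."""
    nn_a = knn_indices(feats_a, k)
    nn_b = knn_indices(feats_b, k)
    overlap = [len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]
    return float(np.mean(overlap))

# Toy usage with random "embeddings" of 512 paired items (e.g. images and their captions).
rng = np.random.default_rng(0)
vision_feats = rng.normal(size=(512, 768))
language_feats = rng.normal(size=(512, 1024))
print(mutual_knn_alignment(vision_feats, language_feats, k=10))  # near chance, ~0.02
```

For what it's worth, random features score near chance (about k/(n-1), roughly 2% in the toy example above), so a 16% alignment between independently trained vision and language models sits well above that baseline, even if there's still room to argue about whether it counts as convergence.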

I am just waiting for the philosophy takes on this paper. But before we come up with another project and set about trying to determine whether moral realism is a thing based on what AI models seem to be doing, we should probably take stock of a few aspects of what's going on in the paper. A couple of callouts seem worth making. One is section 2.4, where the authors draw on the related point that neural networks seem to show substantial alignment with biological representations in the brain. I also think the three hypotheses presented in section 3 are useful intuition pumps: the Multitask Scaling Hypothesis says that if we consider competence at some number of tasks N, we should expect fewer representations to be competent at all N tasks as N grows larger. You might also expect models with larger hypothesis spaces to be more likely to find an optimal representation, if one exists in function space; the authors call this the Capacity Hypothesis. Finally, the Simplicity Bias Hypothesis says that deep networks are biased toward finding simple fits to the data, and that larger models have a stronger version of this bias.

I think, if you buy what's being said here, the "our current paradigm is not very efficient" point becomes something like: fairly general architectures without strong inductive biases toward particular sorts of representations (beyond the simplicity bias), scaled up enough and trained on a general enough set of tasks, will have hypothesis spaces large enough that they'll eventually be pressured toward optimal representations for their data. It is worth noting, though, that datasets and tasks are structured by what we take to be useful and want models to do. So while it might be perfectly fine to posit that there's a representation (or representations) most useful for those things, calling it a "shared representation of reality" feels a bit grandiose. (Maybe I'm just being annoying. But, to be fair, you could be a lot more annoying about this paper if you really wanted; I'll leave doing that as a take-home exercise: imagine you're Reviewer 2 and have at it.) If you've heard of projectivism, it seems reasonable to suspect that this "representation of reality" is itself a bit of projectivism: reading our own useful categories back onto the world.

All that said, this is a thoughtful and interesting paper. I like the counterexamples and limitations section the authors include at the end, and I think you should read it in full.

Also, for other takes on “universal” representations / representations useful for transfer learning, I had a conversation with Hugo Larochelle some time ago that went into a bunch of his work on this—I expect you’d find a number of his papers on the subject interesting if you liked this one.

—Daniel
