Position: The Platonic Representation Hypothesis

Abstract

We argue that representations in AI models, particularly deep networks, are converging. First, we survey many examples of convergence in the literature: over time and across multiple domains, the ways in which different neural networks represent data are becoming more aligned. Next, we demonstrate convergence across data modalities: as vision models and language models get larger, they measure distance between datapoints in increasingly similar ways. We hypothesize that this convergence is driving toward a shared statistical model of reality, akin to Plato's concept of an ideal reality. We term such a representation the platonic representation and discuss several possible selective pressures toward it. Finally, we discuss the implications of these trends, their limitations, and counterexamples to our analysis.
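
The abstract's cross-modal claim is operational: two models are "aligned" to the extent that they impose similar neighborhood structure on the same datapoints. Below is a minimal sketch of one such measurement, a mutual k-nearest-neighbor overlap between two sets of embeddings. The function names, the k=10 default, and the toy data are illustrative assumptions for this sketch, not the paper's exact implementation.

import numpy as np

def knn_indices(feats, k):
    # Pairwise Euclidean distances; exclude each point as its own neighbor.
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def mutual_knn_alignment(feats_a, feats_b, k=10):
    # Fraction of each point's k nearest neighbors that are shared across
    # the two representation spaces, averaged over all points
    # (1.0 = identical neighborhood structure, ~k/n expected by chance).
    nn_a = knn_indices(feats_a, k)
    nn_b = knn_indices(feats_b, k)
    overlap = [len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]
    return float(np.mean(overlap))

# Toy usage: two hypothetical "models" embedding the same 100 datapoints.
rng = np.random.default_rng(0)
x_vision = rng.normal(size=(100, 64))    # e.g., vision-model features
x_language = rng.normal(size=(100, 32))  # e.g., language-model features
print(mutual_knn_alignment(x_vision, x_language))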

Cite

Text

Huh et al. "Position: The Platonic Representation Hypothesis." International Conference on Machine Learning, 2024.

Markdown

[Huh et al. "Position: The Platonic Representation Hypothesis." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/huh2024icml-position/)

BibTeX

@inproceedings{huh2024icml-position,
  title     = {{Position: The Platonic Representation Hypothesis}},
  author    = {Huh, Minyoung and Cheung, Brian and Wang, Tongzhou and Isola, Phillip},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {20617--20642},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/huh2024icml-position/}
}