Emergence of a High-Dimensional Abstraction Phase in Language Transformers

Abstract

A language model (LM) is a mapping from a linguistic context to an output token. However, much remains to be known about this mapping, including how its geometric properties relate to its function. We take a high-level geometric approach to its analysis, observing, across five pre-trained transformer-based LMs and three input datasets, a distinct phase characterized by high intrinsic dimensionality. During this phase, representations (1) correspond to the first full linguistic abstraction of the input; (2) are the first to viably transfer to downstream tasks; (3) predict each other across different LMs. Moreover, we find that an earlier onset of the phase strongly predicts better language modelling performance. In short, our results suggest that a central high-dimensionality phase underlies core linguistic processing in many common LM architectures.
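
The following is a minimal illustrative sketch, not the authors' released pipeline: it estimates the per-layer intrinsic dimension (ID) of transformer hidden states with the TwoNN estimator (Facco et al., 2017), which is one standard way to produce the kind of layer-wise ID profile the abstract refers to. The model name ("gpt2"), the toy input sentences, and the token-pooling choice are all assumptions made for illustration.

# Sketch: per-layer intrinsic dimension of hidden states via TwoNN.
# Model, data, and pooling are illustrative assumptions, not the paper's setup.
import numpy as np
import torch
from sklearn.neighbors import NearestNeighbors
from transformers import AutoModel, AutoTokenizer

def twonn_id(X: np.ndarray) -> float:
    """TwoNN ID estimate from ratios of 2nd- to 1st-nearest-neighbour distances."""
    nn = NearestNeighbors(n_neighbors=3).fit(X)      # self + two neighbours
    dists, _ = nn.kneighbors(X)
    r1, r2 = dists[:, 1], dists[:, 2]
    mu = r2 / np.maximum(r1, 1e-12)                  # guard against duplicate points
    mu = np.sort(mu)[: int(0.9 * len(mu))]           # drop largest 10% for robustness
    return len(mu) / np.sum(np.log(np.maximum(mu, 1.0 + 1e-12)))

tok = AutoTokenizer.from_pretrained("gpt2")          # illustrative model choice
tok.pad_token = tok.eos_token                        # GPT-2 has no pad token by default
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True).eval()

# Toy contexts; a meaningful ID estimate needs thousands of token representations.
sentences = ["The cat sat on the mat.", "Language models map contexts to tokens."]
with torch.no_grad():
    batch = tok(sentences, return_tensors="pt", padding=True)
    hidden = model(**batch).hidden_states            # (n_layers + 1) tensors of [B, T, D]

mask = batch["attention_mask"].bool()
for layer, h in enumerate(hidden):
    reps = h[mask].numpy()                           # one vector per non-padding token
    print(f"layer {layer:2d}  ID ~ {twonn_id(reps):.1f}")

In practice one would collect representations from many contexts per layer and then inspect how the ID estimate varies across depth to locate a phase of elevated intrinsic dimensionality.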

Cite

Text

Cheng et al. "Emergence of a High-Dimensional Abstraction Phase in Language Transformers." International Conference on Learning Representations, 2025.

Markdown

[Cheng et al. "Emergence of a High-Dimensional Abstraction Phase in Language Transformers." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/cheng2025iclr-emergence/)

BibTeX

@inproceedings{cheng2025iclr-emergence,
  title     = {{Emergence of a High-Dimensional Abstraction Phase in Language Transformers}},
  author    = {Cheng, Emily and Doimo, Diego and Kervadec, Corentin and Macocco, Iuri and Yu, Lei and Laio, Alessandro and Baroni, Marco},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/cheng2025iclr-emergence/}
}