Geometry of Decision Making in Language Models

Abstract

Large Language Models (LLMs) show strong generalization across diverse tasks, yet the internal decision-making processes behind their predictions remain opaque. In this work, we study the geometry of hidden representations in LLMs through the lens of intrinsic dimension (ID), focusing specifically on decision-making dynamics in a multiple-choice question answering (MCQA) setting. We perform a large-scale study with 28 open-weight transformer models, estimating ID across layers using multiple estimators while also quantifying per-layer performance on MCQA tasks. Our findings reveal a consistent ID pattern across models: early layers operate on low-dimensional manifolds, middle layers expand this space, and later layers compress it again, converging to decision-relevant representations. Together, these results suggest LLMs implicitly learn to project linguistic inputs onto structured, low-dimensional manifolds aligned with task-specific decisions, providing new geometric insights into how generalization and reasoning emerge in language models.
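
To make the measurement concrete, the sketch below shows one way to compute a per-layer ID profile of the kind the abstract describes: extract last-token hidden states from an open-weight causal LM over a set of MCQA-style prompts, then estimate ID at each layer with the TwoNN estimator (Facco et al., 2017). This is not the authors' pipeline; the model name ("gpt2"), the placeholder prompts, and the choice of the maximum-likelihood TwoNN variant are all assumptions for illustration.

import numpy as np
import torch
from sklearn.neighbors import NearestNeighbors
from transformers import AutoModelForCausalLM, AutoTokenizer


def twonn_id(X: np.ndarray) -> float:
    """TwoNN maximum-likelihood ID estimate for X of shape (n_samples, n_features)."""
    # Distances to the two nearest neighbours of each point (column 0 is the point itself).
    dist, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
    mu = dist[:, 2] / np.maximum(dist[:, 1], 1e-12)  # ratio of 2nd to 1st NN distance
    mu = mu[mu > 1.0]                                # discard degenerate ratios
    return len(mu) / np.sum(np.log(mu))              # MLE of the Pareto exponent = ID


model_name = "gpt2"  # hypothetical stand-in for any of the open-weight models studied
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

# Placeholder MCQA-style prompts; a real run would use an actual benchmark.
prompts = [f"Question {i}: Is {i} even? (A) yes (B) no\nAnswer:" for i in range(256)]

per_prompt = []  # one list of per-layer last-token states for each prompt
with torch.no_grad():
    for p in prompts:
        out = model(**tok(p, return_tensors="pt"), output_hidden_states=True)
        # Hidden state of the final token from the embedding layer and every block.
        per_prompt.append([hs[0, -1].numpy() for hs in out.hidden_states])

# Stack prompts per layer and report one ID estimate per layer.
for layer, states in enumerate(zip(*per_prompt)):
    X = np.stack(states)
    print(f"layer {layer:02d}: ID ~ {twonn_id(X):.1f}")

Under the expand-then-compress pattern reported in the paper, such a profile would rise through the middle layers and fall again toward the output layers; different ID estimators can be swapped into twonn_id to check robustness.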

Cite

Text

Joshi et al. "Geometry of Decision Making in Language Models." Advances in Neural Information Processing Systems, 2025.

Markdown

[Joshi et al. "Geometry of Decision Making in Language Models." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/joshi2025neurips-geometry/)

BibTeX

@inproceedings{joshi2025neurips-geometry,
  title     = {{Geometry of Decision Making in Language Models}},
  author    = {Joshi, Abhinav and Bhatt, Divyanshu and Modi, Ashutosh},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/joshi2025neurips-geometry/}
}