Shared Imagination: LLMs Hallucinate Alike
Abstract
Despite the recent proliferation of large language models (LLMs), their training recipes -- model architecture, pre-training data and optimization algorithm -- are often very similar. This naturally raises the question of the similarity among the resulting models. In this paper, we propose a novel setting, imaginary question answering (IQA), to better understand model similarity. In IQA, we ask one model to generate purely imaginary questions (e.g., on completely made-up concepts in physics) and prompt another model to answer. Surprisingly, despite the total fictionality of these questions, all models can answer each other's questions with remarkable consistency, suggesting a "shared imagination space" in which these models operate during such hallucinations. We conduct a series of investigations into this phenomenon and discuss the implications of such model homogeneity on hallucination detection and computational creativity.
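The IQA protocol described in the abstract can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the paper's implementation: `chat(model, prompt)` is a hypothetical single-turn LLM helper, and the prompt wording, multiple-choice format, and agreement metric are all assumptions chosen for clarity.

```python
# Minimal sketch of the imaginary question answering (IQA) protocol.
# chat() is a hypothetical placeholder for any LLM API client; the
# prompts below are illustrative, not the paper's exact wording.

def chat(model: str, prompt: str) -> str:
    """Hypothetical single-turn LLM call; swap in a real API client."""
    raise NotImplementedError

def generate_imaginary_question(generator: str, topic: str = "physics") -> str:
    # Ask the generator model to invent a question about a made-up concept.
    prompt = (
        f"Invent a completely fictional concept in {topic} and write a "
        "multiple-choice question about it with options A-D. "
        "Mark the intended answer on the last line as 'Answer: <letter>'."
    )
    return chat(generator, prompt)

def answer_imaginary_question(answerer: str, question: str) -> str:
    # Strip the generator's answer key before showing it to the answerer.
    body = question.rsplit("Answer:", 1)[0].strip()
    prompt = body + "\nRespond with a single letter (A-D)."
    return chat(answerer, prompt)

def iqa_agreement(generator: str, answerer: str, n: int = 100) -> float:
    # Fraction of imaginary questions where the answerer picks the option
    # the generator intended, i.e. the cross-model consistency rate.
    hits = 0
    for _ in range(n):
        q = generate_imaginary_question(generator)
        intended = q.rsplit("Answer:", 1)[-1].strip()[:1].upper()
        guess = answer_imaginary_question(answerer, q).strip()[:1].upper()
        hits += int(guess == intended)
    return hits / n
```

Under this sketch, a high `iqa_agreement` between two independently trained models on purely fictional questions is what the paper interprets as evidence of a "shared imagination space."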
Cite
Text
Zhou et al. "Shared Imagination: LLMs Hallucinate Alike." Transactions on Machine Learning Research, 2025.
Markdown
[Zhou et al. "Shared Imagination: LLMs Hallucinate Alike." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/zhou2025tmlr-shared/)
BibTeX
@article{zhou2025tmlr-shared,
  title   = {{Shared Imagination: LLMs Hallucinate Alike}},
  author  = {Zhou, Yilun and Xiong, Caiming and Savarese, Silvio and Wu, Chien-Sheng},
  journal = {Transactions on Machine Learning Research},
  year    = {2025},
  url     = {https://mlanthology.org/tmlr/2025/zhou2025tmlr-shared/}
}