Correlated Errors in Large Language Models
Abstract
Diversity in training data, architecture, and providers is assumed to mitigate homogeneity in LLMs. However, we lack empirical evidence on whether different LLMs differ meaningfully. We conduct a large-scale empirical evaluation of over 350 LLMs, using two popular leaderboards and a resume-screening task. We find substantial correlation in model errors—on one leaderboard dataset, models agree 60% of the time when both models err. We identify factors driving model correlation, including shared architectures and providers. Crucially, however, larger and more accurate models have highly correlated errors, even across distinct architectures and providers. Finally, we show the effects of correlation on two downstream tasks: LLM-as-judge evaluation and hiring—the latter reflecting theoretical predictions regarding algorithmic monoculture.
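The abstract's headline statistic is agreement conditional on both models erring: among items where two models are both wrong, how often do they give the same wrong answer? A minimal sketch of that computation (not the paper's code; all predictions and labels below are hypothetical):

```python
def agreement_when_both_err(preds_a, preds_b, gold):
    """Fraction of items on which two models give the same wrong answer,
    among items where both models' answers are wrong."""
    both_wrong = [(a, b) for a, b, g in zip(preds_a, preds_b, gold)
                  if a != g and b != g]
    if not both_wrong:
        return 0.0
    agree = sum(1 for a, b in both_wrong if a == b)
    return agree / len(both_wrong)

# Hypothetical multiple-choice answers for two models and the gold key.
gold    = ["A", "B", "C", "D", "A"]
model_1 = ["A", "C", "C", "A", "B"]
model_2 = ["B", "C", "C", "A", "C"]

# Both models err on items 1, 3, and 4; they pick the same wrong
# answer on items 1 and 3, so the conditional agreement is 2/3.
print(agreement_when_both_err(model_1, model_2, gold))  # → 0.666...
```

Under independent errors this quantity would typically be far lower (near chance agreement over the wrong options), which is why a 60% rate signals correlated errors.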
Cite
Text
Kim et al. "Correlated Errors in Large Language Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Kim et al. "Correlated Errors in Large Language Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/kim2025icml-correlated/)
BibTeX
@inproceedings{kim2025icml-correlated,
title = {{Correlated Errors in Large Language Models}},
author = {Kim, Elliot Myunghoon and Garg, Avi and Peng, Kenny and Garg, Nikhil},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {30038--30066},
volume = {267},
url = {https://mlanthology.org/icml/2025/kim2025icml-correlated/}
}