The Consistency Hypothesis in Uncertainty Quantification for Large Language Models
Abstract
Estimating the confidence of large language model (LLM) outputs is essential for real-world applications requiring high user trust. Black-box uncertainty quantification (UQ) methods, relying solely on model API access, have gained popularity due to their practical benefits. In this paper, we examine the implicit assumption behind several UQ methods, which use generation consistency as a proxy for confidence, an idea we formalize as the consistency hypothesis. We introduce three mathematical statements with corresponding statistical tests to capture variations of this hypothesis and metrics to evaluate LLM output conformity across tasks. Our empirical investigation, spanning 8 benchmark datasets and 3 tasks (question answering, text summarization, and text-to-SQL), highlights the prevalence of the hypothesis under different settings. Among the statements, we highlight the 'Sim-Any' hypothesis as the most actionable, and demonstrate how it can be leveraged by proposing data-free black-box UQ methods that aggregate similarities between generations for confidence estimation. These approaches can outperform the closest baselines, showcasing the practical value of the empirically observed consistency hypothesis.
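To make the idea concrete, the kind of data-free black-box UQ method the abstract describes can be sketched as follows: sample several generations for the same prompt, compute pairwise similarities among them, and aggregate those similarities into a single confidence score. This is an illustrative sketch only; the similarity measures and aggregation functions used in the paper are not specified in this abstract, so the stdlib `SequenceMatcher` ratio and a simple mean are stand-in assumptions.

```python
from difflib import SequenceMatcher
from itertools import combinations

def consistency_confidence(generations):
    """Aggregate pairwise similarities among sampled LLM generations
    into one confidence score. Here: mean pairwise similarity.
    SequenceMatcher is a hypothetical stand-in for the paper's
    similarity measures (not given in the abstract)."""
    if len(generations) < 2:
        return 1.0  # a single generation gives no disagreement signal
    sims = [SequenceMatcher(None, a, b).ratio()
            for a, b in combinations(generations, 2)]
    return sum(sims) / len(sims)

# Consistent samples should score higher than divergent ones.
consistent = ["Paris is the capital of France.",
              "The capital of France is Paris.",
              "Paris is the capital of France."]
divergent = ["Paris", "Lyon is the capital.", "I am not sure."]

print(consistency_confidence(consistent) > consistency_confidence(divergent))
```

Under the 'Sim-Any' reading, high mutual similarity among independently sampled generations is taken as evidence that the model is confident in its answer; the aggregation above is one simple way to operationalize that, and requires no labeled data or access to model internals.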
Cite
Text
Xiao et al. "The Consistency Hypothesis in Uncertainty Quantification for Large Language Models." Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, 2025.
Markdown
[Xiao et al. "The Consistency Hypothesis in Uncertainty Quantification for Large Language Models." Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, 2025.](https://mlanthology.org/uai/2025/xiao2025uai-consistency/)
BibTeX
@inproceedings{xiao2025uai-consistency,
title = {{The Consistency Hypothesis in Uncertainty Quantification for Large Language Models}},
author = {Xiao, Quan and Bhattacharjya, Debarun and Ganesan, Balaji and Marinescu, Radu and Mirylenka, Katya and Pham, Nhan H and Glass, Michael and Lee, Junkyu},
booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
year = {2025},
pages = {4636--4651},
volume = {286},
url = {https://mlanthology.org/uai/2025/xiao2025uai-consistency/}
}