Rethinking Hallucinations: Correctness, Consistency, and Prompt Multiplicity
Abstract
Large language models (LLMs) are known to "hallucinate" by generating false or misleading outputs. Hallucinations pose various harms, from erosion of trust to widespread misinformation. Existing hallucination evaluation, however, focuses only on "correctness" and often overlooks "consistency", which is necessary to distinguish and address these harms. To bridge this gap, we introduce _prompt multiplicity_, a framework for quantifying consistency through prompt sensitivity. Our analysis reveals significant multiplicity (over 50% inconsistency in benchmarks like Med-HALT), suggesting that hallucination-related harms have been severely underestimated. Furthermore, we study the role of consistency in hallucination detection and mitigation. We find that: (a) detection techniques capture consistency, not correctness, and (b) mitigation techniques like RAG can introduce additional inconsistencies. By integrating prompt multiplicity into hallucination evaluation, we provide an improved framework for assessing potential harms and uncover critical limitations in current detection and mitigation strategies.
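To make the idea of quantifying consistency through prompt sensitivity concrete, here is a minimal sketch (not the paper's implementation) of one way to score a model's answers across paraphrased prompts for the same question; the function name and the agreement-with-modal-answer metric are illustrative assumptions, not definitions taken from the paper.

```python
from collections import Counter

def multiplicity_report(answers: list[str]) -> dict:
    """Summarize agreement across a model's answers to paraphrases of one question.

    `answers` holds one normalized answer per prompt variant. Consistency is
    measured here as the share of variants that agree with the modal answer.
    """
    counts = Counter(answers)
    modal_answer, modal_count = counts.most_common(1)[0]
    consistency = modal_count / len(answers)
    return {
        "modal_answer": modal_answer,
        "consistency": consistency,
        "inconsistency": 1.0 - consistency,
        "num_distinct_answers": len(counts),
    }

# Example: five paraphrases of one benchmark question yield three distinct answers.
print(multiplicity_report(["A", "A", "B", "A", "C"]))
# {'modal_answer': 'A', 'consistency': 0.6, 'inconsistency': 0.4, 'num_distinct_answers': 3}
```

Aggregating such per-question inconsistency scores over a benchmark would give the kind of dataset-level multiplicity figure the abstract refers to (e.g., the reported >50% inconsistency on Med-HALT).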
Cite
Text
Ganesh et al. "Rethinking Hallucinations: Correctness, Consistency, and Prompt Multiplicity." ICLR 2025 Workshops: BuildingTrust, 2025.
Markdown
[Ganesh et al. "Rethinking Hallucinations: Correctness, Consistency, and Prompt Multiplicity." ICLR 2025 Workshops: BuildingTrust, 2025.](https://mlanthology.org/iclrw/2025/ganesh2025iclrw-rethinking/)
BibTeX
@inproceedings{ganesh2025iclrw-rethinking,
title = {{Rethinking Hallucinations: Correctness, Consistency, and Prompt Multiplicity}},
author = {Ganesh, Prakhar and Shokri, Reza and Farnadi, Golnoosh},
booktitle = {ICLR 2025 Workshops: BuildingTrust},
year = {2025},
url = {https://mlanthology.org/iclrw/2025/ganesh2025iclrw-rethinking/}
}