Beyond the Factual vs. Hallucinatory Dichotomy: A Refined Taxonomy for LLM Medical Response Categorization
Abstract
Large Language Models (LLMs) are increasingly used in medicine, but the traditional factual/hallucinatory distinction fails to reflect the evolving nature of medical knowledge. This paper critiques that binary and proposes a refined, three-tiered classification: (1) Currently Verifiable Responses, (2) Tentatively Examinable Responses, and (3) Predictive Responses. This framework introduces a veridicality gradient and emphasizes temporal verifiability, enabling more accurate evaluation, reducing clinical risk, and supporting adaptive model calibration. Ultimately, it promotes the development of safer and more epistemically responsible medical AI systems.
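To make the proposed taxonomy concrete, the sketch below encodes the three tiers and the veridicality gradient as simple data types. This is a minimal illustration under stated assumptions: every identifier (ResponseTier, MedicalResponse, veridicality, requires_clinician_review) and the review-routing policy are hypothetical constructs chosen for this example, not definitions from the paper, which presents the categories conceptually.

# Illustrative sketch of the three-tier taxonomy; all names are hypothetical.
from dataclasses import dataclass
from enum import Enum


class ResponseTier(Enum):
    """Three tiers proposed in place of the factual/hallucinatory binary."""
    CURRENTLY_VERIFIABLE = 1    # checkable against present medical evidence
    TENTATIVELY_EXAMINABLE = 2  # partially supported; verification pending
    PREDICTIVE = 3              # forward-looking; not yet verifiable


@dataclass
class MedicalResponse:
    text: str
    tier: ResponseTier
    # Veridicality gradient: 1.0 = fully verifiable now, 0.0 = purely predictive.
    veridicality: float

    def requires_clinician_review(self) -> bool:
        # Example policy (an assumption, not the paper's rule): any response
        # below the top tier is flagged for human review to reduce clinical risk.
        return self.tier is not ResponseTier.CURRENTLY_VERIFIABLE


# Usage: tag an LLM answer with a tier and route it accordingly.
resp = MedicalResponse(
    text="Drug X may reduce relapse rates in trials concluding next year.",
    tier=ResponseTier.PREDICTIVE,
    veridicality=0.2,
)
print(resp.requires_clinician_review())  # True

One point this sketch makes explicit is the paper's emphasis on temporal verifiability: a response's tier is a property of when its claims can be checked, not merely of whether they are currently true.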
Cite
Text
Afroogh et al. "Beyond the Factual vs. Hallucinatory Dichotomy: A Refined Taxonomy for LLM Medical Response Categorization." ICLR 2025 Workshops: MLGenX, 2025. https://mlanthology.org/iclrw/2025/afroogh2025iclrw-beyond/

BibTeX
@inproceedings{afroogh2025iclrw-beyond,
  title = {{Beyond the Factual vs. Hallucinatory Dichotomy: A Refined Taxonomy for LLM Medical Response Categorization}},
  author = {Afroogh, Saleh and Poreesmaiel, Yasser and Jiao, Junfeng},
  booktitle = {ICLR 2025 Workshops: MLGenX},
  year = {2025},
  url = {https://mlanthology.org/iclrw/2025/afroogh2025iclrw-beyond/}
}