Causal Decoding for Hallucination-Resistant Multimodal Large Language Models

Abstract

Multimodal Large Language Models (MLLMs) deliver detailed responses on vision-language tasks, yet remain susceptible to object hallucination (introducing objects not present in the image), undermining reliability in practice. Prior efforts often rely on heuristic penalties, post-hoc correction, or generic decoding tweaks, which do not directly intervene in the mechanisms that trigger object hallucination and thus yield limited gains. To address this challenge, we propose a causal decoding framework that applies targeted causal interventions during generation to curb spurious object mentions. By reshaping the decoding dynamics to attenuate spurious dependencies, our approach reduces false object tokens while maintaining descriptive quality. Across captioning and QA benchmarks, our framework substantially lowers object-hallucination rates and achieves state-of-the-art faithfulness without degrading overall output quality.
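The abstract does not spell out the exact intervention used during decoding, so the snippet below is only a minimal sketch of one plausible intervention-style decoding step in the same spirit: contrast the model's next-token logits under the real image against logits under an ablated (noised) visual input, and penalize tokens that stay likely even without visual evidence. All names here (`next_token_logits`, `ablate_visual_input`, `intervention_guided_step`, `alpha`) are hypothetical stand-ins, not the paper's API, and the model is mocked with a random stub so the example runs on its own.

```python
import numpy as np

# Hypothetical stand-in for an MLLM's next-token logit function.
# A real system would run the vision-language model on (image, prefix);
# here it is a deterministic random stub so the sketch is self-contained.
rng = np.random.default_rng(0)
VOCAB_SIZE = 32

def next_token_logits(image, prefix_ids):
    # Toy logits that vary with the image tensor and the prefix length.
    seed = int(np.abs(image).sum() * 1000) + len(prefix_ids)
    return np.random.default_rng(seed).normal(size=VOCAB_SIZE)

def ablate_visual_input(image, noise_scale=1.0):
    # One simple "intervention": replace the image with noise so the
    # model must fall back on its language prior alone.
    return rng.normal(scale=noise_scale, size=image.shape)

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def intervention_guided_step(image, prefix_ids, alpha=1.0):
    """One decoding step that downweights tokens whose probability is
    driven by the language prior rather than by visual evidence."""
    logits_full = next_token_logits(image, prefix_ids)
    logits_ablated = next_token_logits(ablate_visual_input(image), prefix_ids)
    # Tokens that remain likely without the image are penalized;
    # alpha controls the strength of the contrastive adjustment.
    adjusted = (1 + alpha) * logits_full - alpha * logits_ablated
    return int(np.argmax(softmax(adjusted)))

if __name__ == "__main__":
    image = rng.normal(size=(3, 8, 8))   # toy image tensor
    prefix = [1, 5, 7]                   # toy generated-token ids
    print(intervention_guided_step(image, prefix, alpha=0.8))
```

In practice such a step would be repeated autoregressively, appending each chosen token to the prefix; the contrastive adjustment is one common way to operationalize "attenuating spurious dependencies" at decode time, but the paper's actual intervention may differ.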

Cite

Text

Tan et al. "Causal Decoding for Hallucination-Resistant Multimodal Large Language Models." Transactions on Machine Learning Research, 2026.

Markdown

[Tan et al. "Causal Decoding for Hallucination-Resistant Multimodal Large Language Models." Transactions on Machine Learning Research, 2026.](https://mlanthology.org/tmlr/2026/tan2026tmlr-causal/)

BibTeX

@article{tan2026tmlr-causal,
  title     = {{Causal Decoding for Hallucination-Resistant Multimodal Large Language Models}},
  author    = {Tan, Shiwei and Wang, Hengyi and Qin, Weiyi and Xu, Qi and Hua, Zhigang and Wang, Hao},
  journal   = {Transactions on Machine Learning Research},
  year      = {2026},
  url       = {https://mlanthology.org/tmlr/2026/tan2026tmlr-causal/}
}