MLLM Can See? Dynamic Correction Decoding for Hallucination Mitigation
Abstract
Multimodal Large Language Models (MLLMs) frequently exhibit hallucination phenomena, but the underlying reasons remain poorly understood. In this paper, we present an empirical analysis and find that, although MLLMs incorrectly generate the target objects in the final output, they are actually able to recognize these visual objects in the preceding layers. We speculate that this may be because the strong knowledge priors of the language model suppress the visual information, leading to hallucinations. Motivated by this, we propose a novel dynamic correction decoding method for MLLMs (Deco), which adaptively selects the appropriate preceding layers and proportionally integrates their knowledge into the final layer to adjust the output logits. Note that Deco is model-agnostic and can be seamlessly combined with various classic decoding strategies and applied to different MLLMs. We evaluate Deco on widely used benchmarks, demonstrating that it reduces hallucination rates by a large margin compared to baselines, highlighting its potential to mitigate hallucinations.
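To make the abstract's mechanism concrete, below is a minimal, hedged sketch of the logit-correction idea: pick a preceding layer whose early-exit prediction looks reliable and mix its logits into the final layer's logits. The function name `deco_correct_logits`, the fixed mixing weight `alpha`, the `candidate_layers` range, and the confidence-based layer selection rule are illustrative assumptions, not the paper's exact formulation (Deco selects layers adaptively and sets the integration proportion dynamically).

```python
import torch

def deco_correct_logits(hidden_states, lm_head, alpha=0.5,
                        candidate_layers=range(20, 29)):
    """Sketch of dynamic correction decoding for one decoding step.

    hidden_states: list of per-layer hidden states at the current position,
                   each of shape (hidden_dim,), e.g. obtained from the MLLM
                   with output_hidden_states=True.
    lm_head:       the model's output projection (hidden_dim -> vocab_size).
    alpha:         hypothetical fixed mixing weight (the paper sets the
                   proportion dynamically).
    candidate_layers: hypothetical range of preceding layers to consider.
    """
    final_logits = lm_head(hidden_states[-1])

    # Select the preceding layer whose early-exit prediction is most confident
    # (one plausible selection criterion; the paper's rule may differ).
    best_layer, best_conf = None, -1.0
    for layer in candidate_layers:
        probs = torch.softmax(lm_head(hidden_states[layer]), dim=-1)
        conf = probs.max().item()
        if conf > best_conf:
            best_conf, best_layer = conf, layer

    # Proportionally integrate the preceding layer's knowledge into the
    # final layer's logits before sampling the next token.
    early_logits = lm_head(hidden_states[best_layer])
    return final_logits + alpha * early_logits
```

Because the correction only modifies the output logits, a sketch like this can sit in front of any standard decoding strategy (greedy, beam search, nucleus sampling) without changing the model itself.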
Cite
Text
Wang et al. "MLLM Can See? Dynamic Correction Decoding for Hallucination Mitigation." ICLR 2025 Workshops: FM-Wild, 2025.
Markdown
[Wang et al. "MLLM Can See? Dynamic Correction Decoding for Hallucination Mitigation." ICLR 2025 Workshops: FM-Wild, 2025.](https://mlanthology.org/iclrw/2025/wang2025iclrw-mllm/)
BibTeX
@inproceedings{wang2025iclrw-mllm,
title = {{MLLM Can See? Dynamic Correction Decoding for Hallucination Mitigation}},
author = {Wang, Chenxi and Chen, Xiang and Zhang, Ningyu and Tian, Bozhong and Xu, Haoming and Deng, Shumin and Chen, Huajun},
booktitle = {ICLR 2025 Workshops: FM-Wild},
year = {2025},
url = {https://mlanthology.org/iclrw/2025/wang2025iclrw-mllm/}
}