HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding

Abstract

While large vision-language models (LVLMs) have demonstrated impressive capabilities in interpreting multi-modal contexts, they invariably suffer from object hallucinations (OH). We introduce HALC, a novel decoding algorithm designed to mitigate OH in LVLMs. HALC leverages distinct fine-grained optimal visual information in vision-language tasks and operates on both local and global contexts simultaneously. Specifically, HALC integrates a robust auto-focal grounding mechanism (locally) to correct hallucinated tokens on the fly, and a specialized beam search algorithm (globally) to significantly reduce OH while preserving text generation quality. Additionally, HALC can be integrated into any LVLM as a plug-and-play module without extra training. Extensive experimental studies demonstrate HALC’s effectiveness in reducing OH, outperforming state-of-the-art methods across four benchmarks. Code is released at https://github.com/BillChan226/HALC.
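The abstract does not spell out the decoding rule itself; as a rough illustration of the focal-contrast idea only (not the authors' implementation, which is available at the repository above), the sketch below contrasts next-token logits obtained from a focused visual field against logits from a broader field. The function name, the alpha weighting, and the toy logits are all illustrative assumptions.

import numpy as np

def focal_contrast_step(logits_focused, logits_expanded, alpha=1.0):
    # Illustrative contrast rule: boost tokens whose likelihood rises when
    # the model attends to a focused (well-grounded) visual field, and damp
    # tokens supported only by the broader, less grounded field.
    contrast = (1 + alpha) * logits_focused - alpha * logits_expanded
    probs = np.exp(contrast - contrast.max())  # numerically stable softmax
    return probs / probs.sum()

# Toy usage over a 5-token vocabulary.
focused  = np.array([2.0, 0.5, 1.2, -0.3, 0.1])   # logits with a tight crop around the candidate object
expanded = np.array([1.8, 1.4, 1.0, -0.2, 0.3])   # logits with a wider field of view
print(focal_contrast_step(focused, expanded))

In HALC, such token-level corrections are applied locally during generation, while the specialized beam search described in the abstract selects among candidate sequences globally.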

Cite

Text

Chen et al. "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding." International Conference on Machine Learning, 2024.

Markdown

[Chen et al. "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/chen2024icml-halc/)

BibTeX

@inproceedings{chen2024icml-halc,
  title     = {{HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding}},
  author    = {Chen, Zhaorun and Zhao, Zhuokai and Luo, Hongyin and Yao, Huaxiu and Li, Bo and Zhou, Jiawei},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {7824--7846},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/chen2024icml-halc/}
}