Have the VLMs Lost Confidence? A Study of Sycophancy in VLMs

Abstract

In the study of LLMs, sycophancy represents a prevalent form of hallucination that poses significant challenges to these models. Specifically, LLMs often fail to adhere to their original, correct responses and instead blindly agree with users' opinions, even when those opinions are incorrect or malicious. However, research on sycophancy in visual language models (VLMs) has been scarce. In this work, we extend the exploration of sycophancy from LLMs to VLMs, introducing the MM-SY benchmark to evaluate this phenomenon. We present evaluation results for multiple representative models, addressing the gap in sycophancy research for VLMs. To mitigate sycophancy, we propose a synthetic training dataset and employ methods based on prompting, supervised fine-tuning, and direct preference optimization (DPO). Our experiments demonstrate that these methods effectively alleviate sycophancy in VLMs. Additionally, we probe VLMs to assess the semantic impact of sycophancy and analyze the attention distribution over visual tokens. Our findings indicate that the ability to resist sycophancy is predominantly observed in the higher layers of the model. The lack of attention to image knowledge in these higher layers may contribute to sycophancy, and enhancing image attention in the higher layers proves beneficial for mitigating this issue.

Cite

Text

Li et al. "Have the VLMs Lost Confidence? A Study of Sycophancy in VLMs." International Conference on Learning Representations, 2025.

Markdown

[Li et al. "Have the VLMs Lost Confidence? A Study of Sycophancy in VLMs." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/li2025iclr-vlms/)

BibTeX

@inproceedings{li2025iclr-vlms,
  title     = {{Have the VLMs Lost Confidence? A Study of Sycophancy in VLMs}},
  author    = {Li, Shuo and Ji, Tao and Fan, Xiaoran and Lu, Linsheng and Yang, Leyi and Yang, Yuming and Xi, Zhiheng and Zheng, Rui and Wang, Yuran and {Xh.Zhao} and Gui, Tao and Zhang, Qi and Huang, Xuanjing},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/li2025iclr-vlms/}
}