Understanding and Rectifying Safety Perception Distortion in VLMs
Abstract
Recent studies reveal that vision-language models (VLMs) become more susceptible to harmful requests and jailbreak attacks after integrating the vision modality, exhibiting greater vulnerability than their text-only LLM backbones. To uncover the root cause of this phenomenon, we conduct an in-depth analysis and identify a key issue: multimodal inputs introduce a modality-induced activation shift toward a “safer” direction compared to their text-only counterparts, leading VLMs to systematically overestimate the safety of harmful inputs. We refer to this issue as safety perception distortion. To mitigate such distortion, we propose Activation Shift Disentanglement and Calibration (ShiftDC), a training-free method that decomposes and calibrates the modality-induced activation shift to reduce its impact on safety. By isolating and removing the safety-relevant component, ShiftDC restores the inherent safety alignment of the LLM backbone while preserving the vision-language capabilities of VLMs. Experiments demonstrate that ShiftDC significantly enhances safety alignment without impairing model utility.
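The following is a minimal sketch of the calibration idea described in the abstract, assuming access to per-layer hidden activations and a pre-computed safety direction; the names (`calibrate_activation`, `safety_direction`) and the random-tensor example are illustrative stand-ins, not the authors' implementation.

```python
import torch


def calibrate_activation(h_multimodal: torch.Tensor,
                         h_text_only: torch.Tensor,
                         safety_direction: torch.Tensor) -> torch.Tensor:
    """Remove the safety-relevant part of the modality-induced activation shift.

    h_multimodal:     hidden state for the image+text input, shape (d,)
    h_text_only:      hidden state for the text-only counterpart, shape (d,)
    safety_direction: vector along which the model encodes safe vs. unsafe
    """
    # Modality-induced shift: how adding the vision input moves the activation.
    shift = h_multimodal - h_text_only

    # Disentangle the shift into a safety-relevant component (projection onto
    # the safety direction) and a safety-irrelevant remainder.
    u = safety_direction / safety_direction.norm()
    safety_component = (shift @ u) * u

    # Calibrate: keep the safety-irrelevant part of the shift (to preserve
    # vision-language ability) and drop the part that biases the input
    # toward being perceived as "safer".
    return h_text_only + (shift - safety_component)


# Example usage with random tensors standing in for real layer activations.
d = 4096
h_mm, h_txt = torch.randn(d), torch.randn(d)
safety_dir = torch.randn(d)
h_calibrated = calibrate_activation(h_mm, h_txt, safety_dir)
```

In practice such a calibration would be applied at selected transformer layers during inference, which is what makes the approach training-free.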
Cite
Text
Zou et al. "Understanding and Rectifying Safety Perception Distortion in VLMs." Advances in Neural Information Processing Systems, 2025.
Markdown
[Zou et al. "Understanding and Rectifying Safety Perception Distortion in VLMs." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/zou2025neurips-understanding/)
BibTeX
@inproceedings{zou2025neurips-understanding,
  title     = {{Understanding and Rectifying Safety Perception Distortion in VLMs}},
  author    = {Zou, Xiaohan and Kang, Jian and Kesidis, George and Lin, Lu},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/zou2025neurips-understanding/}
}