An Image Is Worth 1000 Lies: Transferability of Adversarial Images Across Prompts on Vision-Language Models
Abstract
Unlike traditional task-specific vision models, recent large VLMs can readily adapt to different vision tasks simply by using different textual instructions, i.e., prompts. However, a well-known concern about traditional task-specific vision models is that they can be misled by imperceptible adversarial perturbations. This concern is exacerbated by the phenomenon that the same adversarial perturbations can fool different task-specific models. Given that VLMs rely on prompts to adapt to different tasks, an intriguing question emerges: Can a single adversarial image mislead all predictions of VLMs when a thousand different prompts are given? This question essentially introduces a novel perspective on adversarial transferability: cross-prompt adversarial transferability. In this work, we propose the Cross-Prompt Attack (CroPA). The proposed method updates the visual adversarial perturbation with learnable textual prompts, which are designed to counteract the misleading effects of the adversarial image. By doing this, CroPA significantly improves the transferability of adversarial examples across prompts. Extensive experiments verify the strong cross-prompt adversarial transferability of CroPA with prevalent VLMs including Flamingo, BLIP-2, and InstructBLIP across a variety of tasks.
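The abstract describes an alternating min-max optimization: the visual perturbation is updated to minimize the loss of producing a target output, while a learnable prompt perturbation is updated in the opposite direction to counteract it. The following toy sketch illustrates this alternating scheme; it is not the paper's implementation — the quadratic `target_loss`, its analytic gradients, and all hyperparameter names are illustrative stand-ins for the VLM's actual loss and gradients.

```python
import numpy as np

# Toy stand-in for the VLM's loss of emitting the target text,
# as a quadratic in (image + visual perturbation, prompt + prompt perturbation).
def target_loss(image, delta_v, prompt, delta_t):
    return float(np.sum((image + delta_v - (prompt + delta_t)) ** 2))

def grad_delta_v(image, delta_v, prompt, delta_t):
    # Analytic gradient of the toy loss w.r.t. the visual perturbation.
    return 2.0 * (image + delta_v - (prompt + delta_t))

def grad_delta_t(image, delta_v, prompt, delta_t):
    # Analytic gradient of the toy loss w.r.t. the prompt perturbation.
    return -2.0 * (image + delta_v - (prompt + delta_t))

def cropa_sketch(image, prompt, eps=0.1, alpha=0.02, beta=0.02, steps=100, k=5):
    """Alternating min-max sketch (assumed structure, not the paper's code):
    delta_v descends the target loss (the attack), clipped to an eps-ball;
    delta_t ascends the same loss every k steps, counteracting the attack
    so that delta_v must work across perturbed prompts."""
    delta_v = np.zeros_like(image)
    delta_t = np.zeros_like(prompt)
    for t in range(steps):
        g_v = grad_delta_v(image, delta_v, prompt, delta_t)
        delta_v = np.clip(delta_v - alpha * np.sign(g_v), -eps, eps)
        if t % k == 0:
            g_t = grad_delta_t(image, delta_v, prompt, delta_t)
            delta_t = delta_t + beta * np.sign(g_t)  # gradient ascent step
    return delta_v, delta_t
```

Even against the adversarially moving prompt perturbation, the learned `delta_v` lowers the target loss at the clean prompt (`delta_t = 0`), mirroring the intuition that optimizing against counteracting prompts yields a perturbation that transfers across prompts.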
Cite
Text
Luo et al. "An Image Is Worth 1000 Lies: Transferability of Adversarial Images Across Prompts on Vision-Language Models." International Conference on Learning Representations, 2024.

Markdown
[Luo et al. "An Image Is Worth 1000 Lies: Transferability of Adversarial Images Across Prompts on Vision-Language Models." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/luo2024iclr-image/)

BibTeX
@inproceedings{luo2024iclr-image,
  title = {{An Image Is Worth 1000 Lies: Transferability of Adversarial Images Across Prompts on Vision-Language Models}},
  author = {Luo, Haochen and Gu, Jindong and Liu, Fengyuan and Torr, Philip},
  booktitle = {International Conference on Learning Representations},
  year = {2024},
  url = {https://mlanthology.org/iclr/2024/luo2024iclr-image/}
}