Visual Adversarial Examples Jailbreak Aligned Large Language Models
Abstract
Warning: this paper contains data, prompts, and model outputs that are offensive in nature. Recently, there has been a surge of interest in integrating vision into Large Language Models (LLMs), exemplified by Visual Language Models (VLMs) such as Flamingo and GPT-4. This paper sheds light on the security and safety implications of this trend. First, we underscore that the continuous and high-dimensional nature of the visual input makes it a weak link against adversarial attacks, representing an expanded attack surface of vision-integrated LLMs. Second, we highlight that the versatility of LLMs also presents visual attackers with a wider array of achievable adversarial objectives, extending the implications of security failures beyond mere misclassification. As an illustration, we present a case study in which we exploit visual adversarial examples to circumvent the safety guardrail of aligned LLMs with integrated vision. Intriguingly, we discover that a single visual adversarial example can universally jailbreak an aligned LLM, compelling it to heed a wide range of harmful instructions (that it otherwise would not) and generate harmful content that transcends the narrow scope of a "few-shot" derogatory corpus initially employed to optimize the adversarial example. Our study underscores the escalating adversarial risks associated with the pursuit of multimodality. Our findings also connect the long-studied adversarial vulnerabilities of neural networks to the nascent field of AI alignment. The presented attack suggests a fundamental adversarial challenge for AI alignment, especially in light of the emerging trend toward multimodality in frontier foundation models.
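The attack described in the abstract optimizes only the visual input. Below is a minimal sketch of an L∞-constrained projected-gradient-descent loop of the kind the paper describes, assuming a differentiable `loss_fn` that returns the VLM's negative log-likelihood of the few-shot harmful corpus conditioned on the image. The toy loss, the `eps`/`alpha`/`steps` values, and all names here are illustrative placeholders, not the authors' implementation.

```python
# Sketch of an L_inf-constrained PGD attack on the visual input of a
# vision-language model. In the real attack, loss_fn would be the VLM's
# negative log-likelihood of a small corpus of harmful target sentences
# conditioned on the image; a toy quadratic loss is used here only so the
# sketch runs end to end.
import torch


def pgd_attack(loss_fn, image, eps=16 / 255, alpha=1 / 255, steps=500):
    """Minimize loss_fn(image) within an L_inf ball of radius eps around the
    original image, returning the perturbed (adversarial) image."""
    original = image.detach()
    adv = original.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(adv)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv - alpha * grad.sign()                      # descend on the NLL
            adv = original + (adv - original).clamp(-eps, eps)   # project to the eps-ball
            adv = adv.clamp(0.0, 1.0)                            # keep a valid image
    return adv.detach()


if __name__ == "__main__":
    # Toy stand-in: distance to a fixed "target" image plays the role of the
    # corpus negative log-likelihood, purely for demonstration.
    torch.manual_seed(0)
    clean = torch.rand(1, 3, 224, 224)
    target = torch.rand(1, 3, 224, 224)
    toy_loss = lambda img: ((img - target) ** 2).mean()
    adv = pgd_attack(toy_loss, clean, eps=16 / 255, alpha=1 / 255, steps=50)
    print("max perturbation:", (adv - clean).abs().max().item())
```

The projection step keeps the perturbation inside an ℓ∞ ball so the adversarial image remains visually close to the benign one; the paper also considers larger perturbation budgets and an unconstrained variant.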
Cite
Text
Qi et al. "Visual Adversarial Examples Jailbreak Aligned Large Language Models." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I19.30150
Markdown
[Qi et al. "Visual Adversarial Examples Jailbreak Aligned Large Language Models." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/qi2024aaai-visual/) doi:10.1609/AAAI.V38I19.30150
BibTeX
@inproceedings{qi2024aaai-visual,
title = {{Visual Adversarial Examples Jailbreak Aligned Large Language Models}},
author = {Qi, Xiangyu and Huang, Kaixuan and Panda, Ashwinee and Henderson, Peter and Wang, Mengdi and Mittal, Prateek},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {21527--21536},
doi = {10.1609/AAAI.V38I19.30150},
url = {https://mlanthology.org/aaai/2024/qi2024aaai-visual/}
}