Towards Adversarially Robust Vision-Language Models: Insights from Design Choices and Prompt Formatting Techniques

Abstract

Vision-Language Models (VLMs) have witnessed a surge in both research and real-world applications. However, as they become increasingly prevalent, ensuring their robustness against adversarial attacks is paramount. This work systematically investigates the impact of model design choices on the adversarial robustness of VLMs against image-based attacks. Additionally, we introduce novel, cost-effective approaches to enhance robustness through prompt formatting. By rephrasing questions and suggesting potential adversarial perturbations, we demonstrate substantial improvements in model robustness against strong image-based attacks such as Auto-PGD. Our findings provide important guidelines for developing more robust VLMs, particularly for deployment in safety-critical environments.
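
The abstract describes the defense only at a high level; the sketch below is a minimal, hypothetical illustration of what "rephrasing the question and suggesting potential adversarial perturbations" in the prompt could look like. The function name and prompt wording are assumptions for illustration, not the authors' actual prompts or method.

```python
# Hypothetical sketch of prompt formatting for robustness (not the paper's exact prompts).

def format_robust_prompt(question: str) -> str:
    """Rephrase the user's question and warn the model that the image
    may carry small adversarial perturbations (illustrative wording)."""
    warning = (
        "Note: the accompanying image may contain subtle adversarial "
        "perturbations; answer based on its overall visual content."
    )
    rephrased = f"Please answer the following question carefully: {question}"
    return f"{warning}\n{rephrased}"


if __name__ == "__main__":
    # Example usage: the formatted prompt would be passed to the VLM
    # alongside the (possibly attacked) image.
    print(format_robust_prompt("What object is the person holding?"))
```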

Cite

Text

Bhagwatkar et al. "Towards Adversarially Robust Vision-Language Models: Insights from Design Choices and Prompt Formatting Techniques." ICML 2024 Workshops: NextGenAISafety, 2024.

Markdown

[Bhagwatkar et al. "Towards Adversarially Robust Vision-Language Models: Insights from Design Choices and Prompt Formatting Techniques." ICML 2024 Workshops: NextGenAISafety, 2024.](https://mlanthology.org/icmlw/2024/bhagwatkar2024icmlw-adversarially/)

BibTeX

@inproceedings{bhagwatkar2024icmlw-adversarially,
  title     = {{Towards Adversarially Robust Vision-Language Models: Insights from Design Choices and Prompt Formatting Techniques}},
  author    = {Bhagwatkar, Rishika and Nayak, Shravan and Bayat, Reza and Roger, Alexis and Kaplan, Daniel Z and Bashivan, Pouya and Rish, Irina},
  booktitle = {ICML 2024 Workshops: NextGenAISafety},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/bhagwatkar2024icmlw-adversarially/}
}