TAPT: Test-Time Adversarial Prompt Tuning for Robust Inference in Vision-Language Models

Abstract

Large pre-trained Vision-Language Models (VLMs) such as CLIP have demonstrated excellent zero-shot generalizability across various downstream tasks. However, recent studies have shown that the inference performance of CLIP can be greatly degraded by small adversarial perturbations, particularly those applied to its visual modality, posing significant safety threats. To mitigate this vulnerability, in this paper, we propose a novel defense method called Test-Time Adversarial Prompt Tuning (TAPT) to enhance the inference robustness of CLIP against visual adversarial attacks. TAPT is a test-time defense method that learns defensive bimodal (textual and visual) prompts to robustify the inference process of CLIP. Specifically, it is an unsupervised method that optimizes the defensive prompts for each test sample by minimizing multi-view entropy and aligning adversarial and clean distributions. We evaluate the effectiveness of TAPT on 11 benchmark datasets, including ImageNet and 10 other zero-shot datasets, demonstrating that it enhances the zero-shot adversarial robustness of the original CLIP by at least 48.9% against AutoAttack (AA), while largely maintaining performance on clean examples. Moreover, TAPT outperforms existing adversarial prompt tuning methods across various backbones, achieving an average robustness improvement of at least 36.6%. Code is available at https://github.com/xinwong/TAPT.

Cite

Text

Wang et al. "TAPT: Test-Time Adversarial Prompt Tuning for Robust Inference in Vision-Language Models." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.01854

Markdown

[Wang et al. "TAPT: Test-Time Adversarial Prompt Tuning for Robust Inference in Vision-Language Models." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/wang2025cvpr-tapt/) doi:10.1109/CVPR52734.2025.01854

BibTeX

@inproceedings{wang2025cvpr-tapt,
  title     = {{TAPT: Test-Time Adversarial Prompt Tuning for Robust Inference in Vision-Language Models}},
  author    = {Wang, Xin and Chen, Kai and Zhang, Jiaming and Chen, Jingjing and Ma, Xingjun},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {19910--19920},
  doi       = {10.1109/CVPR52734.2025.01854},
  url       = {https://mlanthology.org/cvpr/2025/wang2025cvpr-tapt/}
}