Endowing Visual Reprogramming with Adversarial Robustness

Abstract

Visual reprogramming (VR) leverages well-developed pre-trained models (e.g., a pre-trained classifier on ImageNet) to tackle target tasks (e.g., a traffic sign recognition task), without the need for training from scratch. Despite the effectiveness of previous VR methods, none of them has considered the adversarial robustness of reprogrammed models against adversarial attacks, which could lead to unpredictable problems in safety-critical target tasks. In this paper, we empirically find that both reprogramming adversarially robust pre-trained models and incorporating adversarial samples from the target task during reprogramming improve the adversarial robustness of reprogrammed models. Furthermore, we propose a theoretically guaranteed upper bound on the adversarial robustness risk of VR, which validates our empirical findings and could provide a theoretical foundation for future research. Extensive experiments demonstrate that by adopting the strategies revealed in our empirical findings, the adversarial robustness of reprogrammed models can be enhanced.
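To make the second strategy concrete, below is a minimal sketch of reprogramming with adversarial samples from the target task, assuming a padding-based input transformation, a simple one-to-one output label mapping, and PGD-style adversarial training. The class and function names, the epsilon/step settings, and the 43-class target task are illustrative assumptions, not the authors' exact method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_TARGET_CLASSES = 43  # e.g., a traffic-sign task; illustrative value


class Reprogrammer(nn.Module):
    """Pads a small target-task image to the pre-trained model's input size
    and adds a trainable perturbation (the visual prompt)."""

    def __init__(self, target_size=32, source_size=224):
        super().__init__()
        self.pad = (source_size - target_size) // 2
        self.delta = nn.Parameter(torch.zeros(3, source_size, source_size))

    def forward(self, x):
        x = F.pad(x, [self.pad] * 4)  # place the target image at the center
        return x + self.delta         # add the trainable visual prompt


def label_map(logits):
    # Illustrative one-to-one output mapping: target class i <- source class i.
    return logits[:, :NUM_TARGET_CLASSES]


def pgd_attack(model, reprogram, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft PGD adversarial examples of the target-task inputs, attacking
    through the frozen reprogrammed pipeline."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(label_map(model(reprogram(x_adv))), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()


def train_step(model, reprogram, optimizer, x, y):
    """One reprogramming step on adversarial samples; only the visual prompt
    `reprogram.delta` is updated, while the pre-trained model stays frozen."""
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)
    x_adv = pgd_attack(model, reprogram, x, y)
    loss = F.cross_entropy(label_map(model(reprogram(x_adv))), y)
    optimizer.zero_grad()
    loss.backward()  # gradients flow only into the visual prompt
    optimizer.step()
    return loss.item()
```

The first strategy would then amount to swapping in adversarially robust weights for `model` while keeping it frozen throughout reprogramming.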

Cite

Text

Zhou et al. "Endowing Visual Reprogramming with Adversarial Robustness." International Conference on Learning Representations, 2025.

Markdown

[Zhou et al. "Endowing Visual Reprogramming with Adversarial Robustness." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/zhou2025iclr-endowing/)

BibTeX

@inproceedings{zhou2025iclr-endowing,
  title     = {{Endowing Visual Reprogramming with Adversarial Robustness}},
  author    = {Zhou, Shengjie and Cheng, Xin and Xu, Haiyang and Yan, Ming and Xiang, Tao and Liu, Feng and Feng, Lei},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/zhou2025iclr-endowing/}
}