Pre-Trained Adversarial Perturbations
Abstract
Self-supervised pre-training has drawn increasing attention in recent years due to its superior performance on numerous downstream tasks after fine-tuning. However, it is well-known that deep learning models lack robustness to adversarial examples, which also raises security concerns for pre-trained models, although this threat has been less explored. In this paper, we delve into the robustness of pre-trained models by introducing Pre-trained Adversarial Perturbations (PAPs), which are universal perturbations crafted on pre-trained models that remain effective when attacking fine-tuned models, without any knowledge of the downstream tasks. To this end, we propose a Low-Level Layer Lifting Attack (L4A) method to generate effective PAPs by lifting the neuron activations of low-level layers of the pre-trained models. Equipped with an enhanced noise augmentation strategy, L4A is effective at generating more transferable PAPs against fine-tuned models. Extensive experiments on typical pre-trained vision models and ten downstream tasks demonstrate that our method improves the attack success rate by a large margin compared to the state-of-the-art methods.
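To illustrate the core idea described in the abstract, the following is a minimal, hypothetical PyTorch sketch, not the authors' exact L4A implementation: a single universal perturbation is optimized to "lift" (maximize) the low-level feature activations of a frozen pre-trained encoder, combined with a simple noise augmentation. The module name `encoder_low_level`, the epsilon budget, the step counts, and the augmentation scheme are all illustrative assumptions.

```python
import torch

def craft_pap(encoder_low_level, loader, eps=10/255, steps=100, lr=1e-2, noise_std=0.05):
    """Return a universal perturbation delta with ||delta||_inf <= eps (sketch)."""
    delta = torch.zeros(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        for x in loader:                                   # images from the pre-training domain
            x_aug = x + noise_std * torch.randn_like(x)    # hypothetical noise augmentation
            feats = encoder_low_level(torch.clamp(x_aug + delta, 0, 1))
            loss = -feats.pow(2).mean()                    # lift low-level activations
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)                    # keep the perturbation bounded
    return delta.detach()
```

Because the objective depends only on the pre-trained encoder's early features, the resulting perturbation requires no labels or downstream-task knowledge, which is what makes it reusable against fine-tuned models.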
Cite
Text
Ban and Dong. "Pre-Trained Adversarial Perturbations." Neural Information Processing Systems, 2022.
Markdown
[Ban and Dong. "Pre-Trained Adversarial Perturbations." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/ban2022neurips-pretrained/)
BibTeX
@inproceedings{ban2022neurips-pretrained,
  title     = {{Pre-Trained Adversarial Perturbations}},
  author    = {Ban, Yuanhao and Dong, Yinpeng},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/ban2022neurips-pretrained/}
}