ImageNet Pre-Training Also Transfers Non-Robustness
Abstract
ImageNet pre-training has enabled state-of-the-art results on many tasks. In spite of its recognized contribution to generalization, we observed in this study that ImageNet pre-training also transfers adversarial non-robustness from the pre-trained model to the fine-tuned model on downstream classification tasks. We first conducted experiments on various datasets and network backbones to uncover the adversarial non-robustness of fine-tuned models. Further analysis examined the knowledge learned by the fine-tuned model and a standard model trained from scratch, and revealed that the cause of this non-robustness is the non-robust features transferred from the ImageNet pre-trained model. Finally, we analyzed the feature-learning preference of the pre-trained model, explored the factors influencing robustness, and introduced a simple robust ImageNet pre-training solution. Our code is available at https://github.com/jiamingzhang94/ImageNet-Pretraining-transfers-non-robustness.
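The experimental setup described in the abstract, fine-tuning an ImageNet pre-trained backbone on a downstream task and then probing the fine-tuned model with an adversarial attack, can be sketched as follows. This is a minimal illustration only, not the authors' code: it assumes PyTorch/torchvision, a ResNet-18 backbone, CIFAR-10 as the downstream dataset, and a one-step FGSM attack; the paper's actual datasets, backbones, and attack configurations may differ.

```python
# Minimal sketch (not the authors' code): fine-tune an ImageNet pre-trained
# ResNet-18 on CIFAR-10, then probe its adversarial robustness with FGSM.
# Dataset, backbone, and attack settings here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# ImageNet pre-trained backbone with a new 10-class head for the downstream task.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 10)
model = model.to(device)

# Inputs kept in [0, 1]; ImageNet normalization omitted for brevity.
transform = transforms.Compose([transforms.Resize(224), transforms.ToTensor()])
train_set = datasets.CIFAR10("data", train=True, download=True, transform=transform)
test_set = datasets.CIFAR10("data", train=False, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=64)

# Standard fine-tuning loop (one epoch shown for brevity).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
model.train()
for x, y in train_loader:
    x, y = x.to(device), y.to(device)
    optimizer.zero_grad()
    F.cross_entropy(model(x), y).backward()
    optimizer.step()

def fgsm(model, x, y, eps=4 / 255):
    """One-step FGSM perturbation, used only to probe robustness."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Compare clean vs. adversarial accuracy of the fine-tuned model.
model.eval()
clean, robust, total = 0, 0, 0
for x, y in test_loader:
    x, y = x.to(device), y.to(device)
    x_adv = fgsm(model, x, y)
    with torch.no_grad():
        clean += (model(x).argmax(1) == y).sum().item()
        robust += (model(x_adv).argmax(1) == y).sum().item()
    total += y.size(0)
print(f"clean acc {clean / total:.3f}, FGSM acc {robust / total:.3f}")
```

A large gap between clean and adversarial accuracy on such a fine-tuned model is the kind of non-robustness the paper attributes to features inherited from ImageNet pre-training.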
Cite
Text
Zhang et al. "ImageNet Pre-Training Also Transfers Non-Robustness." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I3.25452
Markdown
[Zhang et al. "ImageNet Pre-Training Also Transfers Non-Robustness." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/zhang2023aaai-imagenet/) doi:10.1609/AAAI.V37I3.25452
BibTeX
@inproceedings{zhang2023aaai-imagenet,
title = {{ImageNet Pre-Training Also Transfers Non-Robustness}},
author = {Zhang, Jiaming and Sang, Jitao and Yi, Qi and Yang, Yunfan and Dong, Huiwen and Yu, Jian},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {3436--3444},
doi = {10.1609/AAAI.V37I3.25452},
url = {https://mlanthology.org/aaai/2023/zhang2023aaai-imagenet/}
}