Universal Adversarial Training
Abstract
Standard adversarial attacks change the predicted class label of a selected image by adding specially tailored small perturbations to its pixels. In contrast, a universal perturbation is an update that can be added to any image in a broad class of images, while still changing the predicted class label. We study the efficient generation of universal adversarial perturbations, and also efficient methods for hardening networks to these attacks. We propose a simple optimization-based universal attack that reduces the top-1 accuracy of various network architectures on ImageNet to less than 20%, while learning the universal perturbation 13× faster than the standard method. To defend against these perturbations, we propose universal adversarial training, which models the problem of robust classifier generation as a two-player min-max game, and produces robust models with only 2× the cost of natural training. We also propose a simultaneous stochastic gradient method that is almost free of extra computation, which allows us to do universal adversarial training on ImageNet.
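The two-player min-max game with simultaneous stochastic gradient updates can be sketched on a toy logistic-regression model. This is only an illustration of the idea, not the paper's ImageNet setup: the data, model, dimensions, step sizes, and the L-infinity bound `eps` below are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data standing in for an image classifier.
n, d = 400, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

eps = 0.3            # L-inf bound on the universal perturbation (illustrative)
lr_w, lr_d = 0.1, 0.05
w = np.zeros(d)      # model weights (the "classifier")
delta = np.zeros(d)  # ONE perturbation shared by every input

def grads(w, delta, Xb, yb):
    """Logistic loss; gradients w.r.t. weights and the shared perturbation."""
    p = 1.0 / (1.0 + np.exp(-(Xb + delta) @ w))
    r = (p - yb) / len(yb)                  # dLoss/dlogits
    return (Xb + delta).T @ r, r.sum() * w  # grad wrt w, grad wrt delta

for step in range(500):
    idx = rng.choice(n, size=32, replace=False)
    g_w, g_d = grads(w, delta, X[idx], y[idx])
    # Simultaneous stochastic gradient step: the classifier descends the
    # loss while the universal perturbation ascends it (min-max game),
    # and delta is projected back into the eps-ball after each update.
    w = w - lr_w * g_w
    delta = np.clip(delta + lr_d * np.sign(g_d), -eps, eps)

# The robustly trained model should still classify clean data well.
p_clean = 1.0 / (1.0 + np.exp(-(X @ w)))
acc_clean = np.mean((p_clean > 0.5) == (y > 0.5))
print(f"clean accuracy: {acc_clean:.2f}, ||delta||_inf = {np.abs(delta).max():.2f}")
```

Because both players update from the same minibatch gradient pass, the perturbation training adds almost no computation on top of the ordinary SGD step, which is the point of the simultaneous scheme.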
Cite
Shafahi et al. "Universal Adversarial Training." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I04.6017
@inproceedings{shafahi2020aaai-universal,
title = {{Universal Adversarial Training}},
author = {Shafahi, Ali and Najibi, Mahyar and Xu, Zheng and Dickerson, John P. and Davis, Larry S. and Goldstein, Tom},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2020},
pages = {5636--5643},
doi = {10.1609/AAAI.V34I04.6017},
url = {https://mlanthology.org/aaai/2020/shafahi2020aaai-universal/}
}