Diversity Can Be Transferred: Output Diversification for White- and Black-Box Attacks
Abstract
Adversarial attacks often involve random perturbations of the inputs drawn from uniform or Gaussian distributions, e.g. to initialize optimization-based white-box attacks or generate update directions in black-box attacks. These simple perturbations, however, could be sub-optimal as they are agnostic to the model being attacked. To improve the efficiency of these attacks, we propose Output Diversified Sampling (ODS), a novel sampling strategy that attempts to maximize diversity in the target model's outputs among the generated samples. While ODS is a gradient-based strategy, the diversity offered by ODS is transferable and can be helpful for both white-box and black-box attacks via surrogate models. Empirically, we demonstrate that ODS significantly improves the performance of existing white-box and black-box attacks. In particular, ODS reduces the number of queries needed for state-of-the-art black-box attacks on ImageNet by a factor of two.
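The abstract leaves the sampling rule implicit; below is a minimal PyTorch sketch of one way to realize the idea, in which the ODS perturbation direction is taken to be the normalized input gradient of a randomly weighted combination of the (surrogate) model's logits. The function name ods_direction and the uniform sampling of the output-space weights on [-1, 1] are illustrative assumptions, not the paper's exact specification.

import torch

def ods_direction(model, x):
    # Sketch of an Output Diversified Sampling (ODS) step direction.
    # Draw a random weight vector w over the model's output (logit)
    # dimensions and return the normalized input gradient of w^T f(x),
    # so the direction pushes the model's output along a randomly
    # chosen direction in output space.
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)                                  # f(x), shape (B, C)
    # Random output-space direction; uniform on [-1, 1]^C is an assumption.
    w = torch.empty_like(logits).uniform_(-1.0, 1.0)
    (logits * w).sum().backward()
    grad = x.grad
    # Normalize per example so the caller controls the step size.
    norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
    return grad / norm.view(-1, *([1] * (grad.dim() - 1)))

For example, a white-box attack restart could be initialized as x0 = (x + eps * ods_direction(surrogate_model, x)).clamp(0, 1), while a black-box attack could use the returned direction (computed on a surrogate) as a query-efficient update proposal.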
Cite
Text
Tashiro et al. "Diversity Can Be Transferred: Output Diversification for White- and Black-Box Attacks." Neural Information Processing Systems, 2020.
Markdown
[Tashiro et al. "Diversity Can Be Transferred: Output Diversification for White- and Black-Box Attacks." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/tashiro2020neurips-diversity/)
BibTeX
@inproceedings{tashiro2020neurips-diversity,
title = {{Diversity Can Be Transferred: Output Diversification for White- and Black-Box Attacks}},
author = {Tashiro, Yusuke and Song, Yang and Ermon, Stefano},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/tashiro2020neurips-diversity/}
}