Efficient and Transferable Adversarial Examples from Bayesian Neural Networks
Abstract
An established way to improve the transferability of black-box evasion attacks is to craft adversarial examples on an ensemble-based surrogate to increase diversity. We argue that transferability is fundamentally related to uncertainty. Based on a state-of-the-art Bayesian Deep Learning technique, we propose a new method to efficiently build a surrogate by sampling approximately from the posterior distribution of neural network weights, which represents the belief about the value of each parameter. Our extensive experiments on ImageNet, CIFAR-10 and MNIST show that our approach significantly improves the success rates of four state-of-the-art attacks (by up to 83.2 percentage points), for both intra-architecture and inter-architecture transferability. On ImageNet, our approach reaches a success rate of 94% while reducing training computations from 11.6 to 2.4 exaflops, compared to an ensemble of independently trained DNNs. Our vanilla surrogate achieves higher transferability than three test-time techniques designed for this purpose in 87.5% of cases. Our work demonstrates that the way a surrogate is trained, although an important element of transfer-based attacks, has been overlooked. We are, therefore, the first to review the effectiveness of several training methods in increasing transferability. We provide new directions to better understand the transferability phenomenon and offer a simple but strong baseline for future work.
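The core recipe can be sketched in a few lines: collect approximate posterior samples of the surrogate's weights, then run an iterative gradient attack against the average loss of the sampled models, which plays the role of the ensemble-based surrogate at a fraction of the cost of training independent DNNs. The minimal PyTorch sketch below uses an SGLD-style noisy SGD loop as the posterior sampler and I-FGSM as the attack; all names, hyperparameters, and the choice of sampler are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch, not the paper's implementation: sampler choice,
# function names, and hyperparameters are illustrative assumptions.
import copy
import torch
import torch.nn.functional as F

def sgld_samples(model, loader, n_samples, lr=1e-4, noise_scale=1e-4, steps_between=100):
    """Collect approximate posterior weight samples with SGLD-style noisy SGD."""
    samples = []
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    data_iter = iter(loader)
    for _ in range(n_samples):
        for _ in range(steps_between):
            try:
                x, y = next(data_iter)
            except StopIteration:
                data_iter = iter(loader)
                x, y = next(data_iter)
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
            with torch.no_grad():
                for p in model.parameters():
                    # Langevin-style Gaussian perturbation of the weights
                    p.add_(noise_scale * torch.randn_like(p))
        samples.append(copy.deepcopy(model).eval())
    return samples

def ensemble_ifgsm(models, x, y, eps=8/255, alpha=2/255, iters=10):
    """I-FGSM against the average loss of the posterior-sampled surrogates."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = torch.stack([F.cross_entropy(m(x_adv), y) for m in models]).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into eps-ball
            x_adv = x_adv.clamp(0, 1)                 # keep a valid image
    return x_adv.detach()

# Usage (assuming a surrogate model and a training loader exist):
#   surrogates = sgld_samples(surrogate, train_loader, n_samples=10)
#   x_adv = ensemble_ifgsm(surrogates, x, y)
```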
Cite
Text
Gubri et al. "Efficient and Transferable Adversarial Examples from Bayesian Neural Networks." Uncertainty in Artificial Intelligence, 2022.

Markdown

[Gubri et al. "Efficient and Transferable Adversarial Examples from Bayesian Neural Networks." Uncertainty in Artificial Intelligence, 2022.](https://mlanthology.org/uai/2022/gubri2022uai-efficient/)

BibTeX
@inproceedings{gubri2022uai-efficient,
title = {{Efficient and Transferable Adversarial Examples from Bayesian Neural Networks}},
author = {Gubri, Martin and Cordy, Maxime and Papadakis, Mike and Le Traon, Yves and Sen, Koushik},
booktitle = {Uncertainty in Artificial Intelligence},
year = {2022},
pages = {738--748},
volume = {180},
url = {https://mlanthology.org/uai/2022/gubri2022uai-efficient/}
}