Likelihood Landscapes: A Unifying Principle Behind Many Adversarial Defenses
Abstract
Convolutional Neural Networks have been shown to be vulnerable to adversarial examples, which are known to lie in subspaces close to those occupied by natural data yet are not naturally occurring and have low probability. In this work, we investigate the potential effect defense techniques have on the geometry of the likelihood landscape, i.e., the likelihood of the input images under the trained model. We first propose a way to visualize the likelihood landscape by leveraging an energy-based-model interpretation of discriminative classifiers. We then introduce a measure to quantify the flatness of the likelihood landscape, and observe that a subset of adversarial defense techniques share a similar effect: they flatten the likelihood landscape. We further explore directly regularizing towards a flat landscape to improve adversarial robustness.
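For intuition, below is a minimal PyTorch sketch of the likelihood proxy that the energy-based interpretation yields: a classifier's logits f(x) define an unnormalized log p(x) = logsumexp_y f_y(x) - log Z, and since the partition function Z does not depend on x, the logsumexp of the logits alone suffices for comparing likelihoods across inputs. The grid-evaluation helper, the random plane directions, and all names here (log_likelihood_proxy, landscape_grid, steps, radius) are illustrative assumptions, not the paper's exact protocol.

# Minimal sketch (PyTorch assumed; names and visualization protocol are illustrative).
import torch

def log_likelihood_proxy(model, x):
    # EBM view of a classifier: log p(x) = logsumexp_y f_y(x) - log Z.
    # Z is input-independent, so the logsumexp of the logits is a valid
    # relative likelihood measure for comparing inputs.
    return torch.logsumexp(model(x), dim=-1)

def landscape_grid(model, x, steps=21, radius=0.5):
    # Evaluate the proxy on a 2-D plane around a single image x, spanned
    # by two random unit directions (as in common loss-landscape
    # visualizations; not necessarily the paper's exact setup).
    d1, d2 = torch.randn_like(x), torch.randn_like(x)
    d1, d2 = d1 / d1.norm(), d2 / d2.norm()
    alphas = torch.linspace(-radius, radius, steps)
    grid = torch.empty(steps, steps)
    with torch.no_grad():
        for i, a in enumerate(alphas):
            for j, b in enumerate(alphas):
                x_ab = (x + a * d1 + b * d2).unsqueeze(0)  # add batch dim
                grid[i, j] = log_likelihood_proxy(model, x_ab).item()
    return grid  # a nearly constant grid indicates a flat landscape around x

One simple flatness statistic over such a grid is its standard deviation; the paper defines its own measure, which may differ from this illustration.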
Cite
Text
Lin et al. "Likelihood Landscapes: A Unifying Principle Behind Many Adversarial Defenses." European Conference on Computer Vision Workshops, 2020. doi:10.1007/978-3-030-66415-2_3
Markdown
[Lin et al. "Likelihood Landscapes: A Unifying Principle Behind Many Adversarial Defenses." European Conference on Computer Vision Workshops, 2020.](https://mlanthology.org/eccvw/2020/lin2020eccvw-likelihood/) doi:10.1007/978-3-030-66415-2_3
BibTeX
@inproceedings{lin2020eccvw-likelihood,
title = {{Likelihood Landscapes: A Unifying Principle Behind Many Adversarial Defenses}},
author = {Lin, Fu and Mittapalli, Rohit and Chattopadhyay, Prithvijit and Bolya, Daniel and Hoffman, Judy},
booktitle = {European Conference on Computer Vision Workshops},
year = {2020},
pages = {39--54},
doi = {10.1007/978-3-030-66415-2_3},
url = {https://mlanthology.org/eccvw/2020/lin2020eccvw-likelihood/}
}