LaVAN: Localized and Visible Adversarial Noise

Abstract

Most works on adversarial examples for deep-learning-based image classifiers use noise that, while small, covers the entire image. We explore the case where the noise is allowed to be visible but confined to a small, localized patch of the image, without covering any of the main object(s) in the image. We show that it is possible to generate localized adversarial noise that covers only 2% of the pixels in the image, none of them over the main object, that is transferable across images and locations, and that fools a state-of-the-art Inception v3 model with very high success rates.
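
The attack described in the abstract can be pictured as gradient-based optimization of noise restricted to a small patch mask, leaving the rest of the image untouched. Below is a minimal PyTorch sketch of that idea; the plain cross-entropy loss, fixed patch location, step size, and function name localized_patch_attack are illustrative assumptions, and the paper's actual loss formulation and optimization details differ.

import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T

# Pretrained classifier; the paper attacks Inception v3 (299x299 inputs).
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1).eval()
for p in model.parameters():
    p.requires_grad_(False)
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

def localized_patch_attack(image, target_class, patch_size=42,
                           top=0, left=0, steps=1000, lr=0.05):
    """Optimize a small visible patch so the model predicts `target_class`.

    `image`: tensor of shape (1, 3, 299, 299) with values in [0, 1].
    A 42x42 patch covers roughly 2% of a 299x299 image.
    """
    # Mask selecting only the patch region; pixels outside it are never modified.
    mask = torch.zeros_like(image)
    mask[..., top:top + patch_size, left:left + patch_size] = 1.0

    # The noise is visible and unconstrained in magnitude, only kept in [0, 1].
    noise = torch.rand_like(image, requires_grad=True)
    target = torch.tensor([target_class])

    for _ in range(steps):
        # Paste the noise over the patch region and classify the composed image.
        adv = (image * (1 - mask) + noise * mask).clamp(0, 1)
        loss = F.cross_entropy(model(normalize(adv)), target)

        loss.backward()
        with torch.no_grad():
            noise -= lr * noise.grad.sign()  # signed gradient step toward the target class
            noise.clamp_(0, 1)
            noise.grad.zero_()

    return (image * (1 - mask) + noise * mask).clamp(0, 1)
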

Cite

Text

Karmon et al. "LaVAN: Localized and Visible Adversarial Noise." International Conference on Machine Learning, 2018.

Markdown

[Karmon et al. "LaVAN: Localized and Visible Adversarial Noise." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/karmon2018icml-lavan/)

BibTeX

@inproceedings{karmon2018icml-lavan,
  title     = {{LaVAN: Localized and Visible Adversarial Noise}},
  author    = {Karmon, Danny and Zoran, Daniel and Goldberg, Yoav},
  booktitle = {International Conference on Machine Learning},
  year      = {2018},
  pages     = {2507--2515},
  volume    = {80},
  url       = {https://mlanthology.org/icml/2018/karmon2018icml-lavan/}
}