Dropping Pixels for Adversarial Robustness

Abstract

Deep neural networks are vulnerable to adversarial examples. In this paper, we propose to train and test networks on randomly subsampled images with high drop rates. We show that this approach significantly improves robustness against adversarial examples in all cases of bounded L0, L2, and L∞ perturbations, while reducing standard accuracy only slightly. We argue that subsampling pixels can be viewed as providing a set of robust features for the input image and thus improves robustness without performing adversarial training.
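
As a rough illustration of the defense the abstract describes, the sketch below randomly drops pixels from an image. The function name, the default drop rate, and the choice of zeroing out dropped pixels are illustrative assumptions; the paper only states that high drop rates are used.

import numpy as np

def subsample_pixels(image, drop_rate=0.9, rng=None):
    # Randomly zero out a fraction of pixels in an H x W x C image.
    # drop_rate=0.9 is an assumed "high" rate, not a value from the paper.
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    # Keep each pixel independently with probability 1 - drop_rate;
    # dropped pixels are zeroed across all channels.
    keep_mask = rng.random((h, w)) >= drop_rate
    return image * keep_mask[..., None]

Since the paper both trains and tests on subsampled images, the same transform would be applied at training and inference time.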

Cite

Text

Hosseini et al. "Dropping Pixels for Adversarial Robustness." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019. doi:10.1109/CVPRW.2019.00017

Markdown

[Hosseini et al. "Dropping Pixels for Adversarial Robustness." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/hosseini2019cvprw-dropping/) doi:10.1109/CVPRW.2019.00017

BibTeX

@inproceedings{hosseini2019cvprw-dropping,
  title     = {{Dropping Pixels for Adversarial Robustness}},
  author    = {Hosseini, Hossein and Kannan, Sreeram and Poovendran, Radha},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2019},
  pages     = {91--97},
  doi       = {10.1109/CVPRW.2019.00017},
  url       = {https://mlanthology.org/cvprw/2019/hosseini2019cvprw-dropping/}
}