AdvDrop: Adversarial Attack to DNNs by Dropping Information

Abstract

Humans can easily recognize visual objects with lost information, even when most details are missing and only contours are preserved, e.g., in cartoons. For the visual perception of Deep Neural Networks (DNNs), however, recognizing such abstract objects (visual objects with lost information) remains a challenge. In this work, we investigate this issue from an adversarial viewpoint: will the performance of DNNs decrease even for images that lose only a little information? Towards this end, we propose a novel adversarial attack, named AdvDrop, which crafts adversarial examples by dropping existing information from images. Most previous adversarial attacks explicitly add perturbing information to clean images. In contrast, our work explores the adversarial robustness of DNN models from a novel perspective, crafting adversarial examples by dropping imperceptible details. We demonstrate the effectiveness of AdvDrop through extensive experiments, and show that this new type of adversarial example is more difficult for current defense systems to defend against.
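
The sketch below illustrates the general idea of an "information-dropping" attack, not the authors' exact procedure: it removes detail by quantizing block-DCT coefficients and coarsens the quantization step until the classifier's prediction flips. The DCT-domain quantization mechanism, the greedy search over the step size, and the use of a torchvision ResNet-50 (torchvision >= 0.13) are all assumptions made for illustration; the paper's actual method may differ.

# Illustrative sketch (not the authors' exact method): craft an adversarial example
# by dropping image information via 8x8 block-DCT quantization, using a simple greedy
# search over the quantization step instead of an optimized quantization scheme.
import numpy as np
import torch
from scipy.fftpack import dct, idct
from torchvision.models import resnet50, ResNet50_Weights

def block_dct_quantize(img, q):
    """Drop detail from an HxWxC image in [0, 1] by quantizing 8x8 DCT blocks with step q."""
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        for i in range(0, img.shape[0], 8):
            for j in range(0, img.shape[1], 8):
                block = img[i:i + 8, j:j + 8, c]
                coef = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
                coef = np.round(coef / q) * q  # coarser q -> more information dropped
                out[i:i + 8, j:j + 8, c] = idct(idct(coef, axis=0, norm='ortho'),
                                                axis=1, norm='ortho')
    return np.clip(out, 0.0, 1.0)

weights = ResNet50_Weights.IMAGENET1K_V2
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

def predict(img):
    """Return the model's predicted class index for an HxWxC float image in [0, 1]."""
    x = torch.from_numpy(img).permute(2, 0, 1).float()
    with torch.no_grad():
        return model(preprocess(x).unsqueeze(0)).argmax(1).item()

img = np.random.rand(224, 224, 3)        # placeholder; use a real, correctly classified image
clean_label = predict(img)
for q in np.linspace(0.01, 0.5, 20):     # greedily increase how much information is dropped
    adv = block_dct_quantize(img, q)
    if predict(adv) != clean_label:
        print(f"prediction flipped at quantization step q={q:.3f}")
        break

In practice the image is kept visually close to the original, so the search stops at the smallest quantization step that changes the prediction; stronger steps drop more detail but become perceptible.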

Cite

Text

Duan et al. "AdvDrop: Adversarial Attack to DNNs by Dropping Information." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00741

Markdown

[Duan et al. "AdvDrop: Adversarial Attack to DNNs by Dropping Information." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/duan2021iccv-advdrop/) doi:10.1109/ICCV48922.2021.00741

BibTeX

@inproceedings{duan2021iccv-advdrop,
  title     = {{AdvDrop: Adversarial Attack to DNNs by Dropping Information}},
  author    = {Duan, Ranjie and Chen, Yuefeng and Niu, Dantong and Yang, Yun and Qin, A. K. and He, Yuan},
  booktitle = {International Conference on Computer Vision},
  year      = {2021},
  pages     = {7506--7515},
  doi       = {10.1109/ICCV48922.2021.00741},
  url       = {https://mlanthology.org/iccv/2021/duan2021iccv-advdrop/}
}