G-UAP: Generic Universal Adversarial Perturbation That Fools RPN-Based Detectors

Abstract

Adversarial perturbation constructions have been demonstrated for object detection, but these are image-specific perturbations. Recent works have shown the existence of image-agnostic perturbations, called universal adversarial perturbations (UAPs), that can fool classifiers over a set of natural images. In this paper, we extend this kind of perturbation to attack deep proposal-based object detectors. We present a novel and effective approach called G-UAP to craft universal adversarial perturbations, which can explicitly degrade the detection accuracy of a detector on a wide range of image samples. Our method directly misleads the Region Proposal Network (RPN) of the detectors into mistaking foreground (objects) for background, without specifying an adversarial label for each target (RPN proposal) and without even considering how many objects and object-like targets are in the image. Experimental results over three state-of-the-art detectors and two datasets demonstrate the effectiveness of the proposed method and the transferability of the universal perturbations.
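The core idea the abstract describes — accumulating a single image-agnostic perturbation that pushes an RPN's foreground (objectness) scores toward background across a whole image set — can be sketched as follows. This is a minimal NumPy illustration, not the authors' actual formulation: a toy linear objectness head stands in for the RPN, and the loss, step size, and L∞ budget are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an RPN objectness head: one linear score per image.
# A real attack would backpropagate through the detector's RPN instead.
W = rng.normal(size=(3, 32, 32))

def objectness(x):
    # Sigmoid of a linear score: ~probability that the input is foreground.
    return 1.0 / (1.0 + np.exp(-np.sum(W * x)))

def objectness_grad(x):
    # Gradient of the sigmoid score w.r.t. the input, for the toy model.
    p = objectness(x)
    return p * (1.0 - p) * W

def craft_uap(images, eps=0.05, lr=0.01, epochs=5):
    """Accumulate one universal perturbation that lowers objectness on
    every image, keeping its L-infinity norm within eps."""
    delta = np.zeros_like(images[0])
    for _ in range(epochs):
        for x in images:
            # Descend on objectness: push foreground toward background.
            delta -= lr * objectness_grad(x + delta)
            delta = np.clip(delta, -eps, eps)  # project onto L-inf ball
    return delta

images = [rng.normal(size=(3, 32, 32)) for _ in range(8)]
uap = craft_uap(images)
before = np.mean([objectness(x) for x in images])
after = np.mean([objectness(x + uap) for x in images])
```

The same perturbation `uap` is added to every image, which is what makes it universal; the mean objectness over the set drops while the perturbation stays within the L∞ budget.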

Cite

Text

Wu et al. "G-UAP: Generic Universal Adversarial Perturbation That Fools RPN-Based Detectors." Proceedings of The Eleventh Asian Conference on Machine Learning, 2019.

Markdown

[Wu et al. "G-UAP: Generic Universal Adversarial Perturbation That Fools RPN-Based Detectors." Proceedings of The Eleventh Asian Conference on Machine Learning, 2019.](https://mlanthology.org/acml/2019/wu2019acml-guap/)

BibTeX

@inproceedings{wu2019acml-guap,
  title     = {{G-UAP: Generic Universal Adversarial Perturbation That Fools RPN-Based Detectors}},
  author    = {Wu, Xing and Huang, Lifeng and Gao, Chengying},
  booktitle = {Proceedings of The Eleventh Asian Conference on Machine Learning},
  year      = {2019},
  pages     = {1204--1217},
  volume    = {101},
  url       = {https://mlanthology.org/acml/2019/wu2019acml-guap/}
}