CAMOU: Learning Physical Vehicle Camouflages to Adversarially Attack Detectors in the Wild

Abstract

In this paper, we conduct an intriguing experimental study of physical adversarial attacks on object detectors in the wild. In particular, we learn a camouflage pattern that hides vehicles from detection by state-of-the-art convolutional neural network based detectors. Our approach alternates between two threads. In the first, we train a neural approximation function to imitate how a simulator applies a camouflage to vehicles and how a vehicle detector performs given images of the camouflaged vehicles. In the second, we minimize the approximated detection score by searching for the optimal camouflage. Experiments show that the learned camouflage not only hides a vehicle from image-based detectors under many test cases but also generalizes to different environments, vehicles, and object detectors.
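
The abstract describes an alternating, two-thread optimization: fit a differentiable surrogate ("clone") to the simulator-plus-detector pipeline, then search for a camouflage that minimizes the surrogate's detection score. The following is a minimal PyTorch sketch of that loop, not the authors' implementation: the CloneNet architecture, the stand-in oracle render_and_detect, the flattened 16x16x3 texture, and the gradient-based search in the second thread are all illustrative assumptions (presumably the real simulator-plus-detector pipeline is not differentiable end to end, which is why a surrogate is needed at all).

    import torch
    import torch.nn as nn

    # Hypothetical stand-in for the simulator + detector pipeline:
    # renders a camouflage onto a vehicle in a scene and returns the
    # detector's confidence score for that vehicle. In practice this
    # would call a photo simulator and a CNN detector and would be a
    # non-differentiable black box (hence the .detach()).
    def render_and_detect(camouflage: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(camouflage.mean()).detach()

    # Clone network: a differentiable approximation of the
    # simulator + detector composition (architecture is illustrative).
    class CloneNet(nn.Module):
        def __init__(self, dim: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, 1), nn.Sigmoid(),
            )

        def forward(self, camo: torch.Tensor) -> torch.Tensor:
            return self.net(camo)

    dim = 16 * 16 * 3  # flattened camouflage texture (illustrative size)
    clone = CloneNet(dim)
    clone_opt = torch.optim.Adam(clone.parameters(), lr=1e-3)

    camo = torch.rand(dim, requires_grad=True)  # camouflage being learned
    camo_opt = torch.optim.Adam([camo], lr=1e-2)

    for round_ in range(50):
        # Thread 1: fit the clone to (camouflage, detection score)
        # pairs sampled from the black-box simulator + detector.
        for _ in range(20):
            batch = torch.rand(32, dim)
            scores = torch.stack(
                [render_and_detect(c) for c in batch]
            ).unsqueeze(1)
            loss = nn.functional.mse_loss(clone(batch), scores)
            clone_opt.zero_grad()
            loss.backward()
            clone_opt.step()

        # Thread 2: minimize the approximated detection score by
        # descending through the clone w.r.t. the camouflage only
        # (clone parameters are not updated in this thread).
        for _ in range(20):
            score = clone(camo.unsqueeze(0)).mean()
            camo_opt.zero_grad()
            score.backward()
            camo_opt.step()
            with torch.no_grad():
                camo.clamp_(0.0, 1.0)  # keep texture in a valid color range

    print("approx. detection score:", render_and_detect(camo).item())

In a fuller implementation the two threads would likely share a growing dataset of (camouflage, score) pairs queried near the current best camouflage, rather than resampling random textures each round; the sketch keeps the threads independent for brevity.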

Cite

Text

Zhang et al. "CAMOU: Learning Physical Vehicle Camouflages to Adversarially Attack Detectors in the Wild." International Conference on Learning Representations, 2019.

Markdown

[Zhang et al. "CAMOU: Learning Physical Vehicle Camouflages to Adversarially Attack Detectors in the Wild." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/zhang2019iclr-camou/)

BibTeX

@inproceedings{zhang2019iclr-camou,
  title     = {{CAMOU: Learning Physical Vehicle Camouflages to Adversarially Attack Detectors in the Wild}},
  author    = {Zhang, Yang and Foroosh, Hassan and David, Philip and Gong, Boqing},
  booktitle = {International Conference on Learning Representations},
  year      = {2019},
  url       = {https://mlanthology.org/iclr/2019/zhang2019iclr-camou/}
}