Robust Adversarial Objects Against Deep Learning Models

Abstract

Previous work has shown that Deep Neural Networks (DNNs), including those currently deployed in many fields, are extremely vulnerable to maliciously crafted inputs known as adversarial examples. Despite extensive research on adversarial examples in many domains, adversarial 3D data, such as point clouds, remains comparatively unexplored. The study of adversarial 3D data is crucial given its impact on real-life, high-stakes scenarios such as autonomous driving. In this paper, we propose a novel adversarial attack against PointNet++, a deep neural network that performs classification and segmentation tasks using features learned directly from raw 3D points. In contrast to existing work, our attack generates not only adversarial point clouds but also robust adversarial objects that in turn yield adversarial point clouds when sampled, both in simulation and after physical construction in the real world. We also demonstrate that our objects can bypass existing defense mechanisms designed specifically against adversarial 3D data.

Cite

Text

Tsai et al. "Robust Adversarial Objects Against Deep Learning Models." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I01.5443

Markdown

[Tsai et al. "Robust Adversarial Objects Against Deep Learning Models." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/tsai2020aaai-robust/) doi:10.1609/AAAI.V34I01.5443

BibTeX

@inproceedings{tsai2020aaai-robust,
  title     = {{Robust Adversarial Objects Against Deep Learning Models}},
  author    = {Tsai, Tzungyu and Yang, Kaichen and Ho, Tsung-Yi and Jin, Yier},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {954--962},
  doi       = {10.1609/AAAI.V34I01.5443},
  url       = {https://mlanthology.org/aaai/2020/tsai2020aaai-robust/}
}