Adversarial Examples for Semantic Image Segmentation

Abstract

Machine learning methods in general, and Deep Neural Networks in particular, have been shown to be vulnerable to adversarial perturbations. So far, this phenomenon has mainly been studied in the context of whole-image classification. In this contribution, we analyse how adversarial perturbations affect the task of semantic segmentation. We show how existing adversarial attacks can be transferred to this task, and that it is possible to create imperceptible adversarial perturbations that lead a deep network to misclassify almost all pixels of a chosen class while leaving the network's predictions nearly unchanged outside this class.
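The abstract describes transferring iterative adversarial attacks to dense, per-pixel prediction. The sketch below illustrates one plausible way to do this in PyTorch; it is not the paper's exact construction. The model interface, the choice of a BIM-style sign-gradient update, the way the adversarial target map is built (reassigning pixels of the chosen class to their second-most-likely class), and all hyperparameters (`eps`, `alpha`, `steps`) are assumptions for illustration.

```python
# Hedged sketch: targeted iterative attack on a semantic segmentation model.
# Assumes a PyTorch model that maps an image (1, 3, H, W) to per-pixel
# class logits (1, C, H, W); hyperparameters are illustrative only.
import torch
import torch.nn.functional as F


def segmentation_attack(model, image, target_class, eps=8 / 255, alpha=1 / 255, steps=40):
    """Perturb `image` so pixels predicted as `target_class` are reassigned,
    while the loss keeps predictions elsewhere close to the clean output."""
    model.eval()
    with torch.no_grad():
        clean_logits = model(image)              # (1, C, H, W)
        clean_pred = clean_logits.argmax(dim=1)  # (1, H, W)

    # Adversarial target: keep the clean prediction everywhere except for the
    # chosen class, whose pixels are pushed toward their second-best class.
    second_best = clean_logits.topk(2, dim=1).indices[:, 1]
    adv_target = torch.where(clean_pred == target_class, second_best, clean_pred)

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        logits = model(image + delta)
        loss = F.cross_entropy(logits, adv_target)  # pull predictions toward adv_target
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()      # targeted step: descend the loss
            delta.clamp_(-eps, eps)                 # keep the perturbation imperceptible
            delta.grad.zero_()

    # Assumes image values lie in [0, 1]; adjust the clamp for other normalizations.
    return torch.clamp(image + delta, 0.0, 1.0).detach()
```

Computing the cross-entropy over the full target map is what keeps predictions outside the chosen class nearly unchanged in this sketch: those pixels are trained toward their original labels while only the chosen class is redirected.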

Cite

Text

Fischer et al. "Adversarial Examples for Semantic Image Segmentation." International Conference on Learning Representations, 2017.

Markdown

[Fischer et al. "Adversarial Examples for Semantic Image Segmentation." International Conference on Learning Representations, 2017.](https://mlanthology.org/iclr/2017/fischer2017iclr-adversarial/)

BibTeX

@inproceedings{fischer2017iclr-adversarial,
  title     = {{Adversarial Examples for Semantic Image Segmentation}},
  author    = {Fischer, Volker and Kumar, Mummadi Chaithanya and Metzen, Jan Hendrik and Brox, Thomas},
  booktitle = {International Conference on Learning Representations},
  year      = {2017},
  url       = {https://mlanthology.org/iclr/2017/fischer2017iclr-adversarial/}
}