Adversarial Examples for Edge Detection: They Exist, and They Transfer
Abstract
Convolutional neural networks have recently advanced the state of the art in many tasks including edge and object boundary detection. However, in this paper, we demonstrate that these edge detectors inherit a troubling property of neural networks: they can be fooled by adversarial examples. We show that adding small perturbations to an image causes HED, a CNN-based edge detection model, to fail to locate edges, to detect nonexistent edges, and even to hallucinate arbitrary configurations of edges. More importantly, we find that these adversarial examples blindly transfer to other CNN-based vision models. In particular, attacks on edge detection result in significant drops in accuracy in models trained to perform unrelated, high-level tasks like image classification and semantic segmentation.
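The attack the abstract describes can be illustrated with a standard gradient-based perturbation. Below is a minimal sketch, not the paper's implementation: it assumes a PGD-style L-infinity attack on a generic differentiable edge detector (edge_net, a hypothetical stand-in for HED returning a single logit map), and it suppresses edges by minimizing the mean predicted edge response. The paper's exact losses, perturbation budgets, and attack variants (including hallucinating arbitrary edge configurations) may differ.

# Minimal PGD-style sketch of an edge-suppression attack (illustrative only).
# `edge_net` is a hypothetical stand-in for a differentiable CNN edge detector
# such as HED; it is assumed to return a logit map of shape [1, 1, H, W].
import torch

def pgd_suppress_edges(edge_net, image, epsilon=8 / 255, alpha=2 / 255, steps=10):
    # `image`: tensor of shape [1, 3, H, W] with values in [0, 1].
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        edge_prob = torch.sigmoid(edge_net(adv))   # per-pixel edge probabilities
        loss = edge_prob.mean()                    # total edge response to suppress
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv - alpha * grad.sign()                       # descend on edge response
            adv = image + (adv - image).clamp(-epsilon, epsilon)  # project to L-inf ball
            adv = adv.clamp(0.0, 1.0)                             # keep a valid image
    return adv.detach()

Ascending on the same loss, or matching a chosen target edge map with a per-pixel loss, would sketch the other effects mentioned in the abstract (spurious or hallucinated edges).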
Cite
Text
Cosgrove and Yuille. "Adversarial Examples for Edge Detection: They Exist, and They Transfer." Winter Conference on Applications of Computer Vision, 2020.
Markdown
[Cosgrove and Yuille. "Adversarial Examples for Edge Detection: They Exist, and They Transfer." Winter Conference on Applications of Computer Vision, 2020.](https://mlanthology.org/wacv/2020/cosgrove2020wacv-adversarial/)
BibTeX
@inproceedings{cosgrove2020wacv-adversarial,
title = {{Adversarial Examples for Edge Detection: They Exist, and They Transfer}},
author = {Cosgrove, Christian and Yuille, Alan},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2020},
url = {https://mlanthology.org/wacv/2020/cosgrove2020wacv-adversarial/}
}