Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks
Abstract
Recent studies have shown that DNNs can be compromised by backdoor attacks crafted at training time. A backdoor attack installs a backdoor into the victim model by injecting a backdoor pattern into a small set of training examples. The victim model behaves normally on clean test data, yet consistently predicts a specific (likely incorrect) target class whenever the backdoor pattern is present in an input example. While existing backdoor attacks are effective, they are not stealthy: the modifications made to training data or labels are often suspicious and can easily be detected by simple data filtering or human inspection. In this paper, we present a new type of backdoor attack inspired by an important natural phenomenon: reflection. Based on the mathematical modeling of physical reflection, we propose reflection backdoor (Refool), which plants reflections as the backdoor pattern in a victim model. We demonstrate on 3 computer vision tasks and 5 datasets that Refool can attack state-of-the-art DNNs with a high success rate, and is more resistant to state-of-the-art backdoor defenses than existing attacks.
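To make the idea concrete, the sketch below illustrates one simple way a reflection-style poisoned image could be composed: a clean image is blended with a blurred "reflection" image, mimicking the out-of-focus reflections seen through glass. This is only a minimal illustration under an assumed additive blending model; the function name, the blending weight `alpha`, and the Gaussian blur are illustrative choices and do not reproduce the exact reflection models or poisoning procedure used by Refool.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def add_reflection_backdoor(clean_img, reflection_img, alpha=0.6, blur_sigma=2.0):
    """Blend a blurred reflection layer into a clean image (illustrative only).

    Assumes float arrays in [0, 1] with identical shapes (H, W, C) and a simple
    additive composition x = alpha * clean + (1 - alpha) * blur(reflection).
    The actual Refool attack uses physically motivated reflection models that
    may differ from this sketch.
    """
    # Blur the reflection layer to mimic an out-of-focus reflection through glass.
    reflection = gaussian_filter(reflection_img, sigma=(blur_sigma, blur_sigma, 0))
    poisoned = alpha * clean_img + (1.0 - alpha) * reflection
    return np.clip(poisoned, 0.0, 1.0)

# Hypothetical usage: poison a small fraction of training images of the target class.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.random((32, 32, 3))        # stand-in for a clean training image
    reflection = rng.random((32, 32, 3))   # stand-in for a natural reflection image
    poisoned = add_reflection_backdoor(clean, reflection)
    print(poisoned.shape, poisoned.min(), poisoned.max())
```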
Cite
Liu, Yunfei, Xingjun Ma, James Bailey, and Feng Lu. "Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks." Proceedings of the European Conference on Computer Vision (ECCV), 2020. https://mlanthology.org/eccv/2020/liu2020eccv-reflection/ doi:10.1007/978-3-030-58607-2_11
BibTeX
@inproceedings{liu2020eccv-reflection,
title = {{Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks}},
author = {Liu, Yunfei and Ma, Xingjun and Bailey, James and Lu, Feng},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58607-2_11},
url = {https://mlanthology.org/eccv/2020/liu2020eccv-reflection/}
}