On Visible Adversarial Perturbations & Digital Watermarking
Abstract
Given a machine learning model, adversarial perturbations transform images so that the model classifies them as an attacker-chosen class. Most research in this area has focused on perturbations that are imperceptible to the human eye. Recent work, however, has considered attacks that are perceptible but localized to a small region of the image. Under this threat model, we discuss both defenses that remove such perturbations and attacks that can bypass those defenses.
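To make the attack setting concrete, below is a minimal sketch, not the paper's exact method, of a visible perturbation optimized only inside a small masked region of an image. The classifier model, the mask placement, and all hyperparameters are illustrative assumptions.

import torch
import torch.nn.functional as F

def localized_attack(model, image, target, mask, steps=200, step_size=0.05):
    """Optimize pixels inside `mask` so `model` predicts `target`.

    model:  assumed differentiable classifier returning logits.
    image:  (1, 3, H, W) float tensor with values in [0, 1].
    target: (1,) long tensor holding the attacker-chosen class.
    mask:   (1, 1, H, W) binary tensor; 1 marks the small editable region.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv = torch.clamp(image + mask * delta, 0.0, 1.0)
        loss = F.cross_entropy(model(adv), target)
        loss.backward()
        with torch.no_grad():
            # Signed gradient descent on the target-class loss; pixels
            # outside the mask are never modified, so the perturbation
            # stays visible but localized.
            delta -= step_size * delta.grad.sign()
            delta.grad.zero_()
    return torch.clamp(image + mask * delta.detach(), 0.0, 1.0)

On the defense side, the title's analogy to digital watermarking suggests locating the perturbed region and erasing it. The sketch below assumes the region has already been detected and uses OpenCV's generic inpainting as a stand-in; the detection step is elided and the function name is a hypothetical placeholder, not the paper's pipeline.

import cv2
import numpy as np

def erase_patch(image_bgr: np.ndarray, patch_mask: np.ndarray) -> np.ndarray:
    """image_bgr: HxWx3 uint8 image; patch_mask: HxW uint8 array that is
    nonzero where the suspected perturbation sits. Returns the image with
    that region filled in from its surroundings."""
    return cv2.inpaint(image_bgr, patch_mask, 3, cv2.INPAINT_TELEA)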
Cite
Text
Hayes. "On Visible Adversarial Perturbations & Digital Watermarking." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2018. doi:10.1109/CVPRW.2018.00210Markdown
[Hayes. "On Visible Adversarial Perturbations & Digital Watermarking." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2018.](https://mlanthology.org/cvprw/2018/hayes2018cvprw-visible/) doi:10.1109/CVPRW.2018.00210BibTeX
@inproceedings{hayes2018cvprw-visible,
title = {{On Visible Adversarial Perturbations \& Digital Watermarking}},
author = {Hayes, Jamie},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2018},
pages = {1597--1604},
doi = {10.1109/CVPRW.2018.00210},
url = {https://mlanthology.org/cvprw/2018/hayes2018cvprw-visible/}
}