Robust Recovery of Adversarial Examples
Abstract
Adversarial examples are semantically associated with one class, but modern deep learning architectures fail to see the semantics and associate them with another class. As a result, these examples pose a profound risk to almost every deep learning model. Our proposed architecture can effectively recover examples perturbed at attack magnitudes more than 4x those handled by the state-of-the-art model, despite having fewer parameters than the VGG-13 model. It is composed of a U-Net augmented with self-attention and cross-attention, which enhance the semantics of the image. Our work also compares the results of noise-reconstruction and image-reconstruction methodologies.
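To make the attention mechanism concrete, below is a minimal numpy sketch of scaled dot-product self-attention over the flattened spatial positions of a feature map, as would appear inside a U-Net block. This is an illustration of the generic mechanism only, not the paper's implementation; the projection matrices are random stand-ins for learned weights, and all shapes are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feats, d_k=16, seed=0):
    """Scaled dot-product self-attention over flattened spatial positions.

    feats: (n, c) array of n spatial positions with c channels.
    The Q/K/V projections are random stand-ins for learned weights.
    """
    rng = np.random.default_rng(seed)
    n, c = feats.shape
    Wq, Wk, Wv = (rng.standard_normal((c, d_k)) for _ in range(3))
    Q, K, V = feats @ Wq, feats @ Wk, feats @ Wv
    # (n, n) attention map: each position attends to every other position.
    attn = softmax(Q @ K.T / np.sqrt(d_k))
    # Re-weight the value vectors by the attention map.
    return attn @ V

# Example: an 8x8 feature map with 32 channels, flattened to 64 positions.
x = np.random.default_rng(1).standard_normal((64, 32))
out = self_attention(x)
print(out.shape)  # (64, 16)
```

Cross-attention follows the same pattern, except the keys and values come from a second feature map (e.g. a skip connection), letting one branch of the U-Net attend to another.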
Cite
Text
Bana et al. "Robust Recovery of Adversarial Examples." ICML 2021 Workshops: AML, 2021.
Markdown
[Bana et al. "Robust Recovery of Adversarial Examples." ICML 2021 Workshops: AML, 2021.](https://mlanthology.org/icmlw/2021/bana2021icmlw-robust/)
BibTeX
@inproceedings{bana2021icmlw-robust,
title = {{Robust Recovery of Adversarial Examples}},
author = {Bana, Tejas and Loya, Jatan and Kulkarni, Siddhant Ravindra},
booktitle = {ICML 2021 Workshops: AML},
year = {2021},
url = {https://mlanthology.org/icmlw/2021/bana2021icmlw-robust/}
}