Adversarial Examples with Specular Highlights
Abstract
We introduce specular highlights as a natural adversary and examine how deep neural network classifiers are affected by them, resulting in reduced prediction performance. We also curate two datasets: ImageNet-AH, with artificially generated Gaussian specular highlights, and ImageNet-PT, created by shining a torch on printed images to produce natural specular highlights; both demonstrate significant degradation in classifier performance. We observe roughly a 20% drop in prediction accuracy with artificial specular highlights and roughly a 35% drop on torch-highlighted printed images. These drops call into question the robustness and reliability of modern image classifiers. We also find that fine-tuning these classifiers on specular images does not sufficiently improve prediction performance. To understand why, we perform an activation mapping analysis and examine the regions the networks attend to in images with and without highlights. We find that specular highlights shift the models' attention, which renders fine-tuning ineffective and ultimately leads to the performance drops.
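The paper does not publish its exact generation code, but the ImageNet-AH construction can be illustrated with a minimal sketch: overlay an isotropic 2-D Gaussian intensity bump on an image and blend it toward white, approximating a saturated specular reflection. The function name and the `sigma`/`strength` parameters below are hypothetical choices for illustration, not the authors' settings.

```python
import numpy as np
from PIL import Image

def add_gaussian_highlight(image, center, sigma=40.0, strength=0.8):
    """Overlay a synthetic specular highlight modeled as an isotropic
    2-D Gaussian centered at `center` (x, y) in pixel coordinates.
    sigma and strength are illustrative, not the paper's values."""
    arr = np.asarray(image).astype(np.float32) / 255.0
    h, w = arr.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = center
    # Gaussian intensity mask peaking at the highlight center.
    mask = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    # Blend toward white where the mask is strong, mimicking a
    # saturated specular reflection on the surface.
    out = arr + strength * mask[..., None] * (1.0 - arr)
    return Image.fromarray((np.clip(out, 0.0, 1.0) * 255).astype(np.uint8))

# Example: place a highlight near the image center.
img = Image.open("example.jpg").convert("RGB")
highlighted = add_gaussian_highlight(img, (img.width // 2, img.height // 2))
highlighted.save("example_highlight.jpg")
```

A classifier's accuracy on such perturbed images can then be compared against the clean originals to reproduce the kind of degradation the paper reports.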
Cite
Text
Vats and Jerripothula. "Adversarial Examples with Specular Highlights." IEEE/CVF International Conference on Computer Vision Workshops, 2023. doi:10.1109/ICCVW60793.2023.00388
Markdown
[Vats and Jerripothula. "Adversarial Examples with Specular Highlights." IEEE/CVF International Conference on Computer Vision Workshops, 2023.](https://mlanthology.org/iccvw/2023/vats2023iccvw-adversarial/) doi:10.1109/ICCVW60793.2023.00388
BibTeX
@inproceedings{vats2023iccvw-adversarial,
title = {{Adversarial Examples with Specular Highlights}},
author = {Vats, Vanshika and Jerripothula, Koteswar Rao},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2023},
pages = {3604--3613},
doi = {10.1109/ICCVW60793.2023.00388},
url = {https://mlanthology.org/iccvw/2023/vats2023iccvw-adversarial/}
}