Interpretability-Guided Test-Time Adversarial Defense
Abstract
We propose a novel and low-cost test-time adversarial defense by devising interpretability-guided neuron importance ranking methods to identify neurons important to the output classes. Our method is a training-free approach that can significantly improve the robustness-accuracy tradeoff while incurring minimal computational overhead. While being among the most efficient test-time defenses (4× faster), our method is also robust to a wide range of black-box, white-box, and adaptive attacks that break previous test-time defenses. We demonstrate the efficacy of our method for CIFAR10, CIFAR100, and ImageNet-1k on the standard RobustBench benchmark (with average gains of 2.6%, 4.9%, and 2.8% respectively). We also show improvements (average 1.5%) over the state-of-the-art test-time defenses even under strong adaptive attacks.
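To make the high-level idea concrete, below is a minimal sketch (not the paper's exact algorithm) of a training-free, test-time defense that ranks penultimate-layer neurons by a simple gradient-times-activation importance score for the predicted class and keeps only the top-ranked neurons before re-classifying. The class name, the `keep_ratio` parameter, and the particular importance score are illustrative assumptions; the paper devises its own interpretability-guided ranking methods.

```python
import torch
import torch.nn as nn


class ImportanceMaskedClassifier(nn.Module):
    """Hypothetical wrapper: feature backbone + linear head, with a
    per-sample neuron-importance mask applied at test time only."""

    def __init__(self, feature_extractor: nn.Module, head: nn.Linear,
                 keep_ratio: float = 0.5):
        super().__init__()
        self.features = feature_extractor  # backbone producing a flat feature vector
        self.head = head                   # final linear classifier
        self.keep_ratio = keep_ratio       # fraction of neurons kept per sample (assumed)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Importance needs gradients even when called under torch.no_grad().
        with torch.enable_grad():
            feats = self.features(x).detach().requires_grad_(True)  # (B, D)
            logits = self.head(feats)                               # (B, C)
            pred = logits.argmax(dim=1)
            # Gradient of the predicted-class logit w.r.t. each penultimate neuron.
            sel = logits.gather(1, pred.unsqueeze(1)).sum()
            grads = torch.autograd.grad(sel, feats)[0]              # (B, D)

        # Gradient x activation as a simple attribution proxy (an assumption here).
        importance = (grads * feats).abs()

        # Keep only the top-k most important neurons per sample, then re-classify.
        k = max(1, int(self.keep_ratio * feats.shape[1]))
        topk = importance.topk(k, dim=1).indices
        mask = torch.zeros_like(feats).scatter_(1, topk, 1.0)
        return self.head(feats.detach() * mask)
```

Because the wrapper only re-weights features of an already-trained model at inference time, it requires no retraining and adds roughly one extra forward/backward pass per input, which is consistent with the low-overhead, training-free setting described in the abstract.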
Cite
Text
Kulkarni and Weng. "Interpretability-Guided Test-Time Adversarial Defense." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72913-3_26
Markdown
[Kulkarni and Weng. "Interpretability-Guided Test-Time Adversarial Defense." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/kulkarni2024eccv-interpretabilityguided/) doi:10.1007/978-3-031-72913-3_26
BibTeX
@inproceedings{kulkarni2024eccv-interpretabilityguided,
  title     = {{Interpretability-Guided Test-Time Adversarial Defense}},
  author    = {Kulkarni, Akshay and Weng, Tsui-Wei},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-72913-3_26},
  url       = {https://mlanthology.org/eccv/2024/kulkarni2024eccv-interpretabilityguided/}
}