Towards Interpretable Adversarial Examples via Sparse Adversarial Attack
Abstract
Sparse attacks optimize the magnitude of adversarial perturbations to fool deep neural networks (DNNs) while perturbing only a few pixels (i.e., under the $l_{0}$ constraint), making them suitable for interpreting the vulnerability of DNNs. However, existing solutions fail to yield interpretable adversarial examples due to their poor sparsity. Worse still, they often suffer from heavy computational overhead, poor transferability, and weak attack strength. In this paper, we aim to develop a sparse attack for understanding the vulnerability of DNNs by minimizing the magnitude of initial perturbations under the $l_{0}$ constraint, overcoming these drawbacks while achieving a fast, transferable, and strong attack against DNNs. In particular, a novel and theoretically sound parameterization technique is introduced to approximate the NP-hard $l_{0}$ optimization problem, making it computationally feasible to optimize sparse perturbations directly. Besides, a novel loss function is designed to augment initial perturbations by simultaneously maximizing the adversary property and minimizing the number of perturbed pixels. Extensive experiments demonstrate that our approach, with theoretical performance guarantees, outperforms state-of-the-art sparse attacks in terms of computational overhead, transferability, and attack strength, and is expected to serve as a benchmark for evaluating the robustness of DNNs. In addition, theoretical and empirical results validate that our approach yields sparser adversarial examples, empowering us to discover two categories of noises, i.e., "obscuring noise" and "leading noise", which help interpret how adversarial perturbations mislead classifiers into incorrect predictions. Our code is available at https://github.com/fudong03/SparseAttack .
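To make the $l_{0}$-constrained setting concrete, the sketch below mounts a toy sparse attack on a logistic classifier by gradient ascent on the loss, re-projecting the perturbation onto an $l_{0}$ ball via hard thresholding after each step. This is a generic illustration of an $l_{0}$ budget, not the paper's parameterization or loss; the model, step size, and budget `k` are illustrative assumptions.

```python
import numpy as np

def topk_project(delta, k):
    """Keep only the k largest-magnitude entries of delta (l0 projection)."""
    flat = delta.ravel()
    if k < flat.size:
        smallest = np.argsort(np.abs(flat))[:-k]  # indices of all but the top k
        flat[smallest] = 0.0
    return flat.reshape(delta.shape)

def sparse_attack(x, w, b, y, k=3, steps=200, lr=0.5):
    """Toy l0-constrained attack on a logistic model p = sigmoid(w.x + b).

    Ascends the log-loss w.r.t. the input, then hard-thresholds the
    perturbation so at most k entries are nonzero. A generic stand-in for
    an l0 attack; it does not reproduce the paper's method.
    """
    delta = np.zeros_like(x)
    for _ in range(steps):
        z = w @ (x + delta) + b
        p = 1.0 / (1.0 + np.exp(-z))   # predicted probability of class 1
        grad = (p - y) * w             # d(log-loss)/d(input)
        delta += lr * grad             # ascend the loss (away from true label)
        delta = topk_project(delta, k) # enforce the l0 budget
    return delta
```

Hard thresholding is the simplest way to stay in the $l_{0}$ ball; the paper instead approximates the NP-hard $l_{0}$ problem with a differentiable parameterization, which this sketch does not attempt.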
Cite
Text
Lin et al. "Towards Interpretable Adversarial Examples via Sparse Adversarial Attack." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2025. doi:10.1007/978-3-032-06109-6_6
Markdown
[Lin et al. "Towards Interpretable Adversarial Examples via Sparse Adversarial Attack." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2025.](https://mlanthology.org/ecmlpkdd/2025/lin2025ecmlpkdd-interpretable/) doi:10.1007/978-3-032-06109-6_6
BibTeX
@inproceedings{lin2025ecmlpkdd-interpretable,
title = {{Towards Interpretable Adversarial Examples via Sparse Adversarial Attack}},
author = {Lin, Fudong and Lou, Jiadong and Wang, Hao and Jalaian, Brian and Yuan, Xu},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2025},
pages = {92--110},
doi = {10.1007/978-3-032-06109-6_6},
url = {https://mlanthology.org/ecmlpkdd/2025/lin2025ecmlpkdd-interpretable/}
}