Lattice Climber Attack: Adversarial Attacks for Randomized Mixtures of Classifiers
Abstract
Finite mixtures of classifiers (a.k.a. randomized ensembles) have been proposed as a way to improve robustness against adversarial attacks. However, existing attacks have been shown to be ill-suited to this kind of classifier. In this paper, we discuss the problem of attacking a mixture in a principled way and, based on a geometrical analysis of the problem, introduce two desirable properties of attacks: effectiveness and maximality. We then show that existing attacks do not meet both of these properties. Finally, we introduce a new attack, the lattice climber attack, with theoretical guarantees in the binary linear setting, and demonstrate its performance through experiments on synthetic and real datasets.
Cite
Text
Heredia et al. "Lattice Climber Attack: Adversarial Attacks for Randomized Mixtures of Classifiers." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2025. doi:10.1007/978-3-032-06109-6_3
Markdown
[Heredia et al. "Lattice Climber Attack: Adversarial Attacks for Randomized Mixtures of Classifiers." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2025.](https://mlanthology.org/ecmlpkdd/2025/heredia2025ecmlpkdd-lattice/) doi:10.1007/978-3-032-06109-6_3
BibTeX
@inproceedings{heredia2025ecmlpkdd-lattice,
title = {{Lattice Climber Attack: Adversarial Attacks for Randomized Mixtures of Classifiers}},
author = {Heredia, Lucas Gnecco and Négrevergne, Benjamin and Chevaleyre, Yann},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2025},
  pages = {39--55},
doi = {10.1007/978-3-032-06109-6_3},
url = {https://mlanthology.org/ecmlpkdd/2025/heredia2025ecmlpkdd-lattice/}
}