Detecting Evasion Attacks in Deployed Tree Ensembles
Abstract
Tree ensembles are powerful models that are widely used. However, they are susceptible to evasion attacks where an adversary purposely constructs an adversarial example in order to elicit a misprediction from the model. This can degrade performance and erode a user’s trust in the model. Typically, approaches try to alleviate this problem by verifying how robust a learned ensemble is or robustifying the learning process. We take an alternative approach and attempt to detect adversarial examples in a post-deployment setting. We present a novel method for this task that works by analyzing an unseen example’s output configuration, which is the set of leaves activated by the example in the ensemble’s constituent trees. Our approach works with any additive tree ensemble and does not require training a separate model. We evaluate our approach on three different tree ensemble learners. We empirically show that our method is currently the best adversarial detection method for tree ensembles.
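The abstract's central notion, an example's "output configuration," can be illustrated with a toy sketch. This is not the authors' code: the tree representation, leaf IDs, and the `looks_adversarial` heuristic below are hypothetical simplifications, showing only the basic idea that an unseen example can be flagged when its combination of activated leaves differs from those produced by trusted reference data.

```python
# Illustrative sketch (not the paper's implementation): an "output
# configuration" is the tuple of leaf IDs an example activates, one
# per tree in the ensemble. Toy trees are nested dicts; leaves carry
# a unique 'leaf' id.
TREES = [
    {"feat": 0, "thresh": 0.5,
     "left": {"leaf": 0}, "right": {"leaf": 1}},
    {"feat": 1, "thresh": 1.0,
     "left": {"leaf": 2},
     "right": {"feat": 0, "thresh": 2.0,
               "left": {"leaf": 3}, "right": {"leaf": 4}}},
]

def leaf_id(tree, x):
    """Route example x down one tree and return the reached leaf's id."""
    node = tree
    while "leaf" not in node:
        node = node["left"] if x[node["feat"]] < node["thresh"] else node["right"]
    return node["leaf"]

def output_configuration(trees, x):
    """The combination of leaves x activates across the ensemble."""
    return tuple(leaf_id(t, x) for t in trees)

# Configurations produced by trusted reference examples:
reference = {output_configuration(TREES, x) for x in [[0.2, 0.4], [0.9, 1.5]]}

def looks_adversarial(x):
    # Simplistic proxy for the paper's idea: flag an example whose
    # output configuration never occurred on reference data.
    return output_configuration(TREES, x) not in reference
```

Here `looks_adversarial([0.9, 0.1])` is flagged because the pair of leaves it activates, `(1, 2)`, was never jointly activated by the reference examples, even though each leaf individually was. The actual method analyzes these configurations more carefully and, as stated above, needs no separately trained model.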
Cite
Text
Devos et al. "Detecting Evasion Attacks in Deployed Tree Ensembles." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2023. doi:10.1007/978-3-031-43424-2_8
Markdown
[Devos et al. "Detecting Evasion Attacks in Deployed Tree Ensembles." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2023.](https://mlanthology.org/ecmlpkdd/2023/devos2023ecmlpkdd-detecting/) doi:10.1007/978-3-031-43424-2_8
BibTeX
@inproceedings{devos2023ecmlpkdd-detecting,
title = {{Detecting Evasion Attacks in Deployed Tree Ensembles}},
author = {Devos, Laurens and Perini, Lorenzo and Meert, Wannes and Davis, Jesse},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2023},
pages = {120--136},
doi = {10.1007/978-3-031-43424-2_8},
url = {https://mlanthology.org/ecmlpkdd/2023/devos2023ecmlpkdd-detecting/}
}