Adversarially Robust Decision Tree Relabeling
Abstract
Decision trees are popular models for their interpretability and their success in ensemble models for structured data. However, common decision tree learning algorithms produce models that suffer from adversarial examples. Recent work on robust decision tree learning mitigates this issue by taking adversarial perturbations into account during training. While these methods generate robust shallow trees, their relative quality degrades when training deeper trees due to the methods being greedy. In this work we propose robust relabeling, a post-learning procedure that optimally changes the prediction labels of decision tree leaves to maximize adversarial robustness. We show this can be achieved in polynomial time in terms of the number of samples and leaves. Our results on 10 datasets show a significant improvement in adversarial accuracy both for single decision trees and tree ensembles. Decision trees and random forests trained with a state-of-the-art robust learning algorithm also benefited from robust relabeling.
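To illustrate the idea of relabeling leaves for robustness, the sketch below uses a toy one-dimensional "tree" (a list of split thresholds whose intervals act as leaves) and brute-forces all leaf labelings to maximize adversarial accuracy under perturbations of size at most `eps`. This is only a conceptual illustration under simplified assumptions, not the paper's polynomial-time algorithm; the function names and setup are hypothetical.

```python
from itertools import product

def reachable_leaves(x, thresholds, eps):
    """Indices of the leaves (intervals between sorted thresholds) that a
    perturbed point x' with |x' - x| <= eps can fall into."""
    lo, hi = x - eps, x + eps
    leaves = set()
    for i in range(len(thresholds) + 1):
        left = thresholds[i - 1] if i > 0 else float("-inf")
        right = thresholds[i] if i < len(thresholds) else float("inf")
        if hi > left and lo <= right:  # perturbation range overlaps leaf i
            leaves.add(i)
    return leaves

def robust_relabel(X, y, thresholds, eps):
    """Brute-force search (exponential in the number of leaves, unlike the
    paper's polynomial method) over binary leaf labelings, maximizing the
    fraction of samples that are robustly classified correctly."""
    n_leaves = len(thresholds) + 1
    reach = [reachable_leaves(x, thresholds, eps) for x in X]
    best_labels, best_acc = None, -1.0
    for labels in product([0, 1], repeat=n_leaves):
        # A sample is robustly correct iff every leaf an adversary can
        # push it into predicts the sample's true label.
        acc = sum(all(labels[l] == yi for l in r)
                  for r, yi in zip(reach, y)) / len(y)
        if acc > best_acc:
            best_labels, best_acc = labels, acc
    return best_labels, best_acc
```

For example, with samples at 0.0, 1.0, 2.0 (labels 0, 0, 1), thresholds at 0.5 and 1.5, and `eps = 0.2`, the search assigns labels (0, 0, 1) to the three leaves, making all samples robustly correct.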
Cite
Text
Vos and Verwer. "Adversarially Robust Decision Tree Relabeling." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2022. doi:10.1007/978-3-031-26409-2_13
Markdown
[Vos and Verwer. "Adversarially Robust Decision Tree Relabeling." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2022.](https://mlanthology.org/ecmlpkdd/2022/vos2022ecmlpkdd-adversarially/) doi:10.1007/978-3-031-26409-2_13
BibTeX
@inproceedings{vos2022ecmlpkdd-adversarially,
title = {{Adversarially Robust Decision Tree Relabeling}},
author = {Vos, Daniël and Verwer, Sicco},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2022},
pages = {203--218},
doi = {10.1007/978-3-031-26409-2_13},
url = {https://mlanthology.org/ecmlpkdd/2022/vos2022ecmlpkdd-adversarially/}
}