Robustness for Non-Parametric Classification: A Generic Attack and Defense

Abstract

Adversarially robust machine learning has received much recent attention. However, prior attacks and defenses for non-parametric classifiers have been developed on an ad-hoc, classifier-specific basis. In this work, we take a holistic look at adversarial examples for non-parametric classifiers, including nearest neighbors, decision trees, and random forests. We provide a general defense method, adversarial pruning, which works by preprocessing the dataset so that it becomes well-separated. To test our defense, we provide a novel attack that applies to a wide range of non-parametric classifiers. Theoretically, we derive an optimally robust classifier, which is analogous to the Bayes optimal classifier. We show that adversarial pruning can be viewed as a finite-sample approximation to this optimal classifier. We empirically show that our defense and attack are either better than or competitive with prior work on non-parametric classifiers. Overall, our results provide a strong and broadly applicable baseline for future work on robust non-parametrics.
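To make the defense concrete, here is a minimal sketch of the adversarial-pruning idea, assuming an ℓp robustness radius r: repeatedly remove training points involved in cross-label conflicts (differently-labeled pairs within distance 2r of each other) until the surviving points are well-separated, then train any off-the-shelf non-parametric classifier on what remains. The greedy conflict removal below is a simple stand-in for an exact minimal-removal computation, and the names `adversarial_prune`, `r`, and `p` are illustrative, not taken from the paper.

```python
import numpy as np

def adversarial_prune(X, y, r, p=2):
    """Greedy sketch of adversarial pruning.

    While any pair of differently-labeled points lies within 2r in
    lp distance, drop the point involved in the most such conflicts.
    The returned subset is well-separated: every remaining pair with
    different labels is more than 2r apart.
    """
    keep = np.ones(len(X), dtype=bool)
    while True:
        idx = np.flatnonzero(keep)
        # Pairwise lp distances among the points still kept.
        diff = X[idx, None, :] - X[None, idx, :]
        D = np.linalg.norm(diff, ord=p, axis=-1)
        # A conflict is a differently-labeled pair within distance 2r.
        conflict = (D <= 2 * r) & (y[idx, None] != y[None, idx])
        n_conflicts = conflict.sum(axis=1)
        if n_conflicts.max() == 0:
            return X[idx], y[idx]
        # Remove the worst offender (greedy cover of conflict pairs).
        keep[idx[np.argmax(n_conflicts)]] = False
```

After pruning, any non-parametric learner can be fit on the cleaned data, e.g. a 1-nearest-neighbor classifier via scikit-learn:

```python
from sklearn.neighbors import KNeighborsClassifier

# r is the attack radius the defense is tuned for (illustrative value).
Xp, yp = adversarial_prune(X_train, y_train, r=0.1)
clf = KNeighborsClassifier(n_neighbors=1).fit(Xp, yp)
```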

Cite

Text

Yang et al. "Robustness for Non-Parametric Classification: A Generic Attack and Defense." Artificial Intelligence and Statistics, 2020.

Markdown

[Yang et al. "Robustness for Non-Parametric Classification: A Generic Attack and Defense." Artificial Intelligence and Statistics, 2020.](https://mlanthology.org/aistats/2020/yang2020aistats-robustness/)

BibTeX

@inproceedings{yang2020aistats-robustness,
  title     = {{Robustness for Non-Parametric Classification: A Generic Attack and Defense}},
  author    = {Yang, Yao-Yuan and Rashtchian, Cyrus and Wang, Yizhen and Chaudhuri, Kamalika},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2020},
  pages     = {941--951},
  volume    = {108},
  url       = {https://mlanthology.org/aistats/2020/yang2020aistats-robustness/}
}