Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification
Abstract
Adversarial robustness has become a fundamental requirement in modern machine learning applications. Yet, there has been surprisingly little statistical understanding of it so far. In this paper, we provide the first \emph{optimal} minimax guarantees on the excess risk for adversarially robust classification under the Gaussian mixture model proposed by \cite{schmidt2018adversarially}. The results are stated in terms of the \emph{Adversarial Signal-to-Noise Ratio (AdvSNR)}, which generalizes a similar notion for standard linear classification to the adversarial setting. For Gaussian mixtures with AdvSNR value $r$, we prove an excess risk lower bound of order $\Theta(e^{-(\frac{1}{2}+o(1)) r^2} \frac{d}{n})$ and design a computationally efficient estimator that achieves this optimal rate. Our results are built upon minimal assumptions while covering a wide spectrum of adversarial perturbations, including $\ell_p$ balls for any $p \ge 1$.
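As a point of reference, the display below sketches how an adversarial signal-to-noise ratio of this kind is commonly set up for a two-component Gaussian mixture with class means $\pm\theta$ and covariance $\Sigma$; the symmetric perturbation set $\mathcal{B}$ (e.g., an $\ell_p$ ball of radius $\epsilon$) and the exact normalization are illustrative assumptions, not necessarily the paper's definitions.

$$ \mathrm{AdvSNR}(\theta, \Sigma) \;=\; \max_{w \ne 0} \; \frac{w^\top \theta \;-\; \max_{\delta \in \mathcal{B}} w^\top \delta}{\sqrt{w^\top \Sigma w}} $$

Under these assumptions, the robust error of a linear classifier $\mathrm{sign}(w^\top x)$ is $\Phi\big(-\frac{w^\top \theta - \max_{\delta \in \mathcal{B}} w^\top \delta}{\sqrt{w^\top \Sigma w}}\big)$, where $\Phi$ is the standard normal CDF; when $\mathcal{B} = \{0\}$, the display reduces to the classical SNR $\sqrt{\theta^\top \Sigma^{-1} \theta}$, attained at $w = \Sigma^{-1}\theta$.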
Cite
Text
Dan et al. "Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification." International Conference on Machine Learning, 2020.
Markdown
[Dan et al. "Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/dan2020icml-sharp/)
BibTeX
@inproceedings{dan2020icml-sharp,
title = {{Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification}},
author = {Dan, Chen and Wei, Yuting and Ravikumar, Pradeep},
booktitle = {International Conference on Machine Learning},
year = {2020},
pages = {2345--2355},
volume = {119},
url = {https://mlanthology.org/icml/2020/dan2020icml-sharp/}
}