Robust and Private Learning of Halfspaces

Abstract

In this work, we study the trade-off between differential privacy and adversarial robustness under $L_2$-perturbations in the context of learning halfspaces. We prove nearly tight bounds on the sample complexity of robust private learning of halfspaces for a large regime of parameters. A highlight of our results is that robust and private learning is harder than robust or private learning alone. We complement our theoretical analysis with experimental results on the MNIST and USPS datasets, for a learning algorithm that is both differentially private and adversarially robust.
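As background for the terms used above (these are the standard textbook notions, not definitions quoted from the paper): a halfspace over $\mathbb{R}^d$ is a classifier $h_w(x) = \mathrm{sgn}(\langle w, x \rangle)$, and its $\gamma$-robust error under $L_2$-perturbations on a distribution $\mathcal{D}$ is

$$\mathrm{err}^{\gamma}_{\mathcal{D}}(w) \;=\; \Pr_{(x,y)\sim\mathcal{D}}\big[\,\exists\, x' \text{ with } \|x'-x\|_2 \le \gamma \text{ such that } \mathrm{sgn}(\langle w, x'\rangle) \ne y\,\big],$$

which for $\|w\|_2 = 1$ amounts to the margin condition $y\,\langle w, x\rangle \ge \gamma$ (up to the convention for $\mathrm{sgn}(0)$). A randomized learner $A$ is $(\varepsilon,\delta)$-differentially private if, for every pair of datasets $S, S'$ differing in a single example and every set $O$ of possible outputs,

$$\Pr[A(S) \in O] \;\le\; e^{\varepsilon}\,\Pr[A(S') \in O] + \delta.$$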

Cite

Text

Ghazi et al. "Robust and Private Learning of Halfspaces." Artificial Intelligence and Statistics, 2021.

Markdown

[Ghazi et al. "Robust and Private Learning of Halfspaces." Artificial Intelligence and Statistics, 2021.](https://mlanthology.org/aistats/2021/ghazi2021aistats-robust/)

BibTeX

@inproceedings{ghazi2021aistats-robust,
  title     = {{Robust and Private Learning of Halfspaces}},
  author    = {Ghazi, Badih and Kumar, Ravi and Manurangsi, Pasin and Nguyen, Thao},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2021},
  pages     = {1603--1611},
  volume    = {130},
  url       = {https://mlanthology.org/aistats/2021/ghazi2021aistats-robust/}
}