The Complexity of Adversarially Robust Proper Learning of Halfspaces with Agnostic Noise

Abstract

We study the computational complexity of adversarially robust proper learning of halfspaces in the distribution-independent agnostic PAC model, with a focus on $L_p$ perturbations. We give a computationally efficient learning algorithm and a nearly matching computational hardness result for this problem. An interesting implication of our findings is that the case of $L_{\infty}$ perturbations is provably computationally harder than the case of $L_p$ perturbations for $2 \leq p < \infty$.
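
For concreteness, the objective underlying this problem can be stated precisely. The following is a standard formulation of the $L_p$-robust risk for halfspaces, sketched in our own notation (the symbols $\gamma$ for the perturbation radius, $q$ for the dual exponent, and $\mathcal{R}_{p,\gamma}$ are illustrative, not taken from the paper). For a halfspace $h_w(x) = \mathrm{sign}(\langle w, x \rangle)$ and a distribution $D$ over labeled examples $(x, y) \in \mathbb{R}^d \times \{\pm 1\}$,

$$\mathcal{R}_{p,\gamma}(w) \;=\; \Pr_{(x,y)\sim D}\big[\,\exists\, \delta \in \mathbb{R}^d,\ \|\delta\|_p \leq \gamma :\ \mathrm{sign}(\langle w, x + \delta \rangle) \neq y\,\big].$$

Since $\min_{\|\delta\|_p \leq \gamma} \, y\langle w, x + \delta \rangle = y\langle w, x \rangle - \gamma \|w\|_q$ by Hölder's inequality, where $1/p + 1/q = 1$, the adversarial event reduces (up to tie-breaking at the decision boundary) to the margin condition $y\langle w, x \rangle \leq \gamma \|w\|_q$, which makes the dependence on $p$, through its dual exponent $q$, explicit.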

Cite

Text

Diakonikolas et al. "The Complexity of Adversarially Robust Proper Learning of Halfspaces with Agnostic Noise." Neural Information Processing Systems, 2020.

Markdown

[Diakonikolas et al. "The Complexity of Adversarially Robust Proper Learning of Halfspaces with Agnostic Noise." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/diakonikolas2020neurips-complexity/)

BibTeX

@inproceedings{diakonikolas2020neurips-complexity,
  title     = {{The Complexity of Adversarially Robust Proper Learning of Halfspaces with Agnostic Noise}},
  author    = {Diakonikolas, Ilias and Kane, Daniel M. and Manurangsi, Pasin},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/diakonikolas2020neurips-complexity/}
}