Near-Optimal Cryptographic Hardness of Agnostically Learning Halfspaces and ReLU Regression Under Gaussian Marginals

Abstract

We study the task of agnostically learning halfspaces under the Gaussian distribution. Specifically, given labeled examples $(\mathbf{x}, y)$ from an unknown distribution on $\mathbb{R}^n \times \{\pm 1\}$, whose marginal distribution on $\mathbf{x}$ is the standard Gaussian and whose labels $y$ can be arbitrary, the goal is to output a hypothesis with 0-1 loss $\mathrm{OPT}+\epsilon$, where $\mathrm{OPT}$ is the 0-1 loss of the best-fitting halfspace. We prove a near-optimal computational hardness result for this task, under the widely believed sub-exponential time hardness of the Learning with Errors (LWE) problem. Prior hardness results are either qualitatively suboptimal or apply only to restricted families of algorithms. Our techniques extend to yield near-optimal lower bounds for related problems, including ReLU regression.
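
To make the setup concrete, here is a minimal illustrative sketch (ours, not from the paper) of the learning task the abstract describes: Gaussian marginals on $\mathbf{x}$, arbitrary labels $y$, and the empirical 0-1 loss of a halfspace hypothesis. The hidden direction w_star, the flip rate eta, and all parameter values below are hypothetical choices made purely for demonstration.

import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 100_000  # dimension and sample size (arbitrary illustrative values)

# Marginal on x: the standard Gaussian on R^n.
X = rng.standard_normal((m, n))

# Labels may be arbitrary; as one hypothetical instance, we draw them from a
# hidden halfspace w_star and flip a fraction eta of them at random.
w_star = rng.standard_normal(n)
w_star /= np.linalg.norm(w_star)
y = np.sign(X @ w_star)
eta = 0.1
flip = rng.random(m) < eta
y[flip] *= -1

def zero_one_loss(w, X, y):
    """Empirical 0-1 loss of the halfspace x -> sign(<w, x>)."""
    return np.mean(np.sign(X @ w) != y)

# Here the best-fitting halfspace is (approximately) w_star itself, so
# OPT ~ eta; an agnostic learner must output some hypothesis with 0-1 loss
# at most OPT + epsilon.
print("loss of w_star:  ", zero_one_loss(w_star, X, y))                      # ~ eta
print("loss of random w:", zero_one_loss(rng.standard_normal(n), X, y))      # ~ 0.5

The paper's hardness result says that, under sub-exponential LWE, no efficient algorithm can guarantee loss $\mathrm{OPT}+\epsilon$ in this model; the sketch above only illustrates the objective being measured.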

Cite

Text

Diakonikolas et al. "Near-Optimal Cryptographic Hardness of Agnostically Learning Halfspaces and ReLU Regression Under Gaussian Marginals." International Conference on Machine Learning, 2023.

Markdown

[Diakonikolas et al. "Near-Optimal Cryptographic Hardness of Agnostically Learning Halfspaces and ReLU Regression Under Gaussian Marginals." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/diakonikolas2023icml-nearoptimal/)

BibTeX

@inproceedings{diakonikolas2023icml-nearoptimal,
  title     = {{Near-Optimal Cryptographic Hardness of Agnostically Learning Halfspaces and ReLU Regression Under Gaussian Marginals}},
  author    = {Diakonikolas, Ilias and Kane, Daniel and Ren, Lisheng},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {7922--7938},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/diakonikolas2023icml-nearoptimal/}
}