Hardness of Learning a Single Neuron with Adversarial Label Noise
Abstract
We study the problem of distribution-free learning of a single neuron under adversarial label noise with respect to the squared loss. For a wide range of activation functions, including ReLUs and sigmoids, we prove hardness-of-learning results in the Statistical Query model and under a well-studied assumption on the complexity of refuting XOR formulas. Specifically, we establish that no polynomial-time learning algorithm, even an improper one, can approximate the optimal loss value within any constant factor.
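As a concrete illustration of the learning objective (the notation below is illustrative and not taken verbatim from the paper), the squared loss of a single neuron with activation $\sigma$ over a distribution $D$ on examples $(\mathbf{x}, y)$, and the optimal loss the paper's hardness result refers to, can be written as:

```latex
% Squared-loss objective for a single neuron with activation \sigma
% over an (adversarially labeled) distribution D; OPT is the best
% achievable loss over all weight vectors w.
L_{D}(\mathbf{w}) \;=\; \mathbb{E}_{(\mathbf{x},y)\sim D}\!\left[\bigl(\sigma(\mathbf{w}\cdot\mathbf{x}) - y\bigr)^{2}\right],
\qquad
\mathrm{OPT} \;=\; \min_{\mathbf{w}} L_{D}(\mathbf{w}).
```

In this notation, the hardness result says that no polynomial-time learner, proper or improper, can be guaranteed to output a hypothesis with loss at most $C \cdot \mathrm{OPT}$ for any constant $C$.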
Cite
Text
Diakonikolas et al. "Hardness of Learning a Single Neuron with Adversarial Label Noise." Artificial Intelligence and Statistics, 2022.
Markdown
[Diakonikolas et al. "Hardness of Learning a Single Neuron with Adversarial Label Noise." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/diakonikolas2022aistats-hardness/)
BibTeX
@inproceedings{diakonikolas2022aistats-hardness,
title = {{Hardness of Learning a Single Neuron with Adversarial Label Noise}},
author = {Diakonikolas, Ilias and Kane, Daniel and Manurangsi, Pasin and Ren, Lisheng},
booktitle = {Artificial Intelligence and Statistics},
year = {2022},
pages = {8199--8213},
volume = {151},
url = {https://mlanthology.org/aistats/2022/diakonikolas2022aistats-hardness/}
}