Robustly Learning a Single Neuron via Sharpness
Abstract
We study the problem of learning a single neuron with respect to the $L_2^2$-loss in the presence of adversarial label noise. We give an efficient algorithm that, for a broad family of activations including ReLUs, approximates the optimal $L_2^2$-error within a constant factor. Notably, our algorithm succeeds under much milder distributional assumptions compared to prior work. The key ingredient enabling our results is a novel connection to local error bounds from optimization theory.
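As an illustration of the problem setup (not the paper's algorithm), the sketch below computes the empirical $L_2^2$-error of a single ReLU neuron on labeled samples, including a small fraction of adversarially corrupted labels; the data, neuron parameters, and corruption scheme are all hypothetical.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def l2_squared_error(w, X, y):
    """Empirical L2^2 error of the single neuron x -> relu(<w, x>)."""
    preds = relu(X @ w)
    return np.mean((preds - y) ** 2)

# Toy data: labels generated by a ground-truth neuron w_star,
# then a small fraction corrupted to model adversarial label noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))
w_star = np.ones(5) / np.sqrt(5)
y = relu(X @ w_star)
y[:20] += 10.0  # adversarial corruption on 2% of the labels

print(l2_squared_error(w_star, X, y))
```

Even the ground-truth neuron incurs nonzero $L_2^2$-error on the corrupted labels; the goal in this setting is to efficiently find a neuron whose error is within a constant factor of that optimum.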
Cite
Text
Wang et al. "Robustly Learning a Single Neuron via Sharpness." International Conference on Machine Learning, 2023.
Markdown
[Wang et al. "Robustly Learning a Single Neuron via Sharpness." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/wang2023icml-robustly/)
BibTeX
@inproceedings{wang2023icml-robustly,
title = {{Robustly Learning a Single Neuron via Sharpness}},
author = {Wang, Puqian and Zarifis, Nikos and Diakonikolas, Ilias and Diakonikolas, Jelena},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {36541--36577},
volume = {202},
url = {https://mlanthology.org/icml/2023/wang2023icml-robustly/}
}