On the Error Resistance of Hinge-Loss Minimization
Abstract
Commonly used classification algorithms in machine learning, such as support vector machines, minimize a convex surrogate loss on training examples. In practice, these algorithms are surprisingly robust to errors in the training data. In this work, we identify a set of conditions on the data under which such surrogate loss minimization algorithms provably learn the correct classifier. This allows us to establish, in a unified framework, the robustness of these algorithms under various models of the data as well as of the errors. In particular, we show that if the data is linearly classifiable with a slightly non-trivial margin (i.e., a margin of at least $C/\sqrt{d}$ for $d$-dimensional unit vectors), and the class-conditional distributions are near-isotropic and log-concave, then surrogate loss minimization incurs negligible error on the uncorrupted data even when a constant fraction of examples are adversarially mislabeled.
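As a rough empirical illustration of the abstract's claim (not the paper's construction), the sketch below trains a hinge-loss linear classifier on isotropic Gaussian class-conditional data, flips a constant fraction of training labels near the true decision boundary as a stand-in adversary, and measures error on clean held-out data. The dataset parameters, the 20% corruption rate, and the boundary-targeting corruption rule are all illustrative assumptions.

```python
# Minimal sketch, assuming synthetic Gaussian data; not the paper's setup.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
d, n = 50, 4000
w_star = np.ones(d) / np.sqrt(d)  # ground-truth unit-norm separator

def sample(m):
    """Isotropic Gaussian classes (log-concave), separated along w_star.
    The separation of 3.0 is an arbitrary illustrative choice; the clean
    data is only separable with high probability, not deterministically."""
    y = rng.choice([-1.0, 1.0], size=m)
    X = rng.standard_normal((m, d)) + np.outer(3.0 * y, w_star)
    return X, y

X_train, y_train = sample(n)
X_test, y_test = sample(n)

# Stand-in "adversary": flip 20% of training labels, picking the points
# closest to the true boundary, where flips are hardest to detect.
eta = 0.2
idx = np.argsort(np.abs(X_train @ w_star))[: int(eta * n)]
y_noisy = y_train.copy()
y_noisy[idx] *= -1.0

# Hinge-loss minimization with L2 regularization (a linear SVM).
clf = LinearSVC(loss="hinge", dual=True, C=1.0, max_iter=10000)
clf.fit(X_train, y_noisy)
print("clean test error:", np.mean(clf.predict(X_test) != y_test))
```

Under these assumptions, the clean test error typically stays small despite the corrupted training labels, which is the qualitative behavior the paper analyzes.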
Cite

Text
Talwar. "On the Error Resistance of Hinge-Loss Minimization." Neural Information Processing Systems, 2020.

Markdown
[Talwar. "On the Error Resistance of Hinge-Loss Minimization." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/talwar2020neurips-error/)

BibTeX
@inproceedings{talwar2020neurips-error,
  title     = {{On the Error Resistance of Hinge-Loss Minimization}},
  author    = {Talwar, Kunal},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/talwar2020neurips-error/}
}