Learning with Symmetric Label Noise: The Importance of Being Unhinged

Abstract

Convex potential minimisation is the de facto approach to binary classification. However, Long and Servedio [2008] proved that under symmetric label noise (SLN), minimisation of any convex potential over a linear function class can result in classification performance equivalent to random guessing. This ostensibly shows that convex losses are not SLN-robust. In this paper, we propose a convex, classification-calibrated loss and prove that it is SLN-robust. The loss avoids the Long and Servedio [2008] result by virtue of being negatively unbounded. The loss is a modification of the hinge loss, where one does not clamp at zero; hence, we call it the unhinged loss. We show that the optimal unhinged solution is equivalent to that of a strongly regularised SVM, and is the limiting solution for any convex potential; this implies that strong l2 regularisation makes most standard learners SLN-robust. Experiments confirm the unhinged loss's SLN-robustness.
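The construction described in the abstract is easy to state concretely: the hinge loss on a margin yv is max(0, 1 - yv), and the unhinged loss simply drops the clamp at zero, giving 1 - yv. Below is a minimal Python sketch of the two losses; the function names and example values are illustrative, not from the paper.

import numpy as np

def hinge_loss(y, v):
    # Standard hinge loss: clamped at zero, hence bounded below.
    return np.maximum(0.0, 1.0 - y * v)

def unhinged_loss(y, v):
    # Unhinged loss: the hinge without the clamp at zero.
    # Linear in the margin y * v, hence negatively unbounded.
    return 1.0 - y * v

# Example margins y * v of 2.0 and -0.5:
y = np.array([1.0, 1.0])
v = np.array([2.0, -0.5])
print(hinge_loss(y, v))     # [0.  1.5]
print(unhinged_loss(y, v))  # [-1.   1.5]

Because the unhinged loss is linear in the margin, it satisfies unhinged_loss(1, v) + unhinged_loss(-1, v) = 2 for every v, so under symmetric label flipping the noisy risk is an affine transformation of the clean risk with the same minimiser; this is the intuition behind its SLN-robustness.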

Cite

Text

van Rooyen et al. "Learning with Symmetric Label Noise: The Importance of Being Unhinged." Neural Information Processing Systems, 2015.

Markdown

[van Rooyen et al. "Learning with Symmetric Label Noise: The Importance of Being Unhinged." Neural Information Processing Systems, 2015.](https://mlanthology.org/neurips/2015/vanrooyen2015neurips-learning/)

BibTeX

@inproceedings{vanrooyen2015neurips-learning,
  title     = {{Learning with Symmetric Label Noise: The Importance of Being Unhinged}},
  author    = {van Rooyen, Brendan and Menon, Aditya and Williamson, Robert C.},
  booktitle = {Neural Information Processing Systems},
  year      = {2015},
  pages     = {10--18},
  url       = {https://mlanthology.org/neurips/2015/vanrooyen2015neurips-learning/}
}