Minimizing the Misclassification Error Rate Using a Surrogate Convex Loss
Abstract
We carefully study how well minimizing convex surrogate loss functions corresponds to minimizing the misclassification error rate for the problem of binary classification with linear predictors. We consider the agnostic setting, and investigate guarantees on the misclassification error of the loss-minimizer in terms of the margin error rate of the best predictor. We show that, aiming for such a guarantee, the hinge loss is essentially optimal among all convex losses.
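The key property behind surrogate-loss guarantees of this kind is that the hinge loss pointwise upper-bounds the 0-1 misclassification loss, so minimizing the hinge risk controls the error rate. A minimal sketch of that bound (an illustration on synthetic data, not code from the paper):

```python
import numpy as np

# For a linear predictor w, the signed margin of example (x, y) is y * <w, x>.
# The hinge loss max(0, 1 - margin) upper-bounds the 0-1 loss 1[margin <= 0].

def zero_one_loss(margins):
    """0-1 misclassification loss: 1 when the signed margin is non-positive."""
    return (margins <= 0).astype(float)

def hinge_loss(margins):
    """Hinge loss: max(0, 1 - margin), a convex surrogate for the 0-1 loss."""
    return np.maximum(0.0, 1.0 - margins)

# Synthetic data and an arbitrary linear predictor, purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.sign(rng.normal(size=100))
w = rng.normal(size=5)
margins = y * (X @ w)

# The bound holds pointwise, hence the hinge risk bounds the error rate.
assert np.all(hinge_loss(margins) >= zero_one_loss(margins))
print("error rate:", zero_one_loss(margins).mean())
print("hinge risk:", hinge_loss(margins).mean())
```

Because the bound holds for every example, the averaged (empirical or expected) hinge loss always dominates the misclassification error rate, which is the starting point for the surrogate-loss analysis the abstract describes.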
Cite
Text
Ben-David et al. "Minimizing the Misclassification Error Rate Using a Surrogate Convex Loss." International Conference on Machine Learning, 2012.

Markdown
[Ben-David et al. "Minimizing the Misclassification Error Rate Using a Surrogate Convex Loss." International Conference on Machine Learning, 2012.](https://mlanthology.org/icml/2012/bendavid2012icml-minimizing/)

BibTeX
@inproceedings{bendavid2012icml-minimizing,
  title = {{Minimizing the Misclassification Error Rate Using a Surrogate Convex Loss}},
  author = {Ben-David, Shai and Loker, David and Srebro, Nathan and Sridharan, Karthik},
  booktitle = {International Conference on Machine Learning},
  year = {2012},
  url = {https://mlanthology.org/icml/2012/bendavid2012icml-minimizing/}
}