Linear Hinge Loss and Average Margin
Abstract
We describe a unifying method for proving relative loss bounds for on-line linear threshold classification algorithms, such as the Perceptron and the Winnow algorithms. For classification problems the discrete loss is used, i.e., the total number of prediction mistakes. We introduce a continuous loss function, called the "linear hinge loss", that can be employed to derive the updates of the algorithms. We first prove bounds w.r.t. the linear hinge loss and then convert them to the discrete loss. We introduce a notion of "average margin" of a set of examples. We show how relative loss bounds based on the linear hinge loss can be converted to relative loss bounds in terms of the discrete loss using the average margin.
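The two losses mentioned in the abstract can be illustrated with a minimal sketch. The snippet below is not the paper's construction; it only shows, under standard assumptions, a Perceptron run on labels in {-1, +1}, counting the discrete loss (number of prediction mistakes) alongside a hinge-type loss on the signed margin y(w·x). The function names and the choice of hinge variant are illustrative, not taken from the paper.

```python
import numpy as np

def hinge_style_loss(margin):
    # A hinge-type loss on the signed margin y * (w . x):
    # zero when the example is classified correctly with positive margin,
    # growing linearly as the margin becomes more negative.
    # (The paper's "linear hinge loss" is defined more carefully; this is
    # just the common variant for illustration.)
    return max(0.0, -margin)

def perceptron(examples, eta=1.0, dim=2):
    # examples: iterable of (x, y) with x an ndarray and y in {-1, +1}.
    w = np.zeros(dim)
    mistakes = 0            # discrete loss: total number of prediction mistakes
    total_hinge = 0.0       # cumulative hinge-type loss
    for x, y in examples:
        margin = y * np.dot(w, x)
        total_hinge += hinge_style_loss(margin)
        if margin <= 0:     # prediction mistake (ties counted as mistakes)
            mistakes += 1
            w += eta * y * x    # classical Perceptron update
    return w, mistakes, total_hinge
```

On a linearly separable stream the discrete loss stays bounded, while the cumulative hinge-type loss upper-bounds it, which is the direction of conversion the abstract describes.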
Cite
Gentile and Warmuth. "Linear Hinge Loss and Average Margin." Neural Information Processing Systems, 1998.
@inproceedings{gentile1998neurips-linear,
title = {{Linear Hinge Loss and Average Margin}},
author = {Gentile, Claudio and Warmuth, Manfred K.},
booktitle = {Neural Information Processing Systems},
year = {1998},
pages = {225-231},
url = {https://mlanthology.org/neurips/1998/gentile1998neurips-linear/}
}