Tighter Bounds Lead to Improved Classifiers

Abstract

The standard approach to supervised classification involves the minimization of a log-loss as an upper bound on the classification error. While this bound is tight early in the optimization, it overemphasizes the influence of incorrectly classified examples far from the decision boundary. Updating the upper bound during the optimization leads to improved classification rates while transforming the learning into a sequence of minimization problems. In addition, when the classifier is part of a larger system, this modification makes it possible to link the performance of the classifier to that of the whole system, allowing the seamless introduction of external constraints.
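As a sketch of the construction the abstract alludes to (a reconstruction from the abstract's description via the standard concavity argument; the exact form used in the paper may differ): for a probabilistic classifier p_θ(y | x), the expected error of a prediction sampled from the model is 1 − p_θ(y | x), and since 1 − t ≤ −log t on (0, 1], minimizing the log-loss minimizes an upper bound on this error. Concavity of the logarithm yields a whole family of such bounds, one per anchor point ν ∈ (0, 1]:

\[
1 - p_\theta(y \mid x) \;\le\; 1 - \nu + \nu \log \nu - \nu \log p_\theta(y \mid x),
\]

which is tight at ν = p_θ(y | x) and recovers the plain log-loss bound at ν = 1. Periodically resetting each example's ν to the model's current probability and re-minimizing the resulting ν-weighted log-loss turns learning into the sequence of minimization problems the abstract mentions; the weights shrink for confidently misclassified examples, which is how the updated bound tempers the influence of points far from the decision boundary.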

Cite

Text

Nicolas Le Roux. "Tighter Bounds Lead to Improved Classifiers." International Conference on Learning Representations, 2017.

Markdown

[Nicolas Le Roux. "Tighter Bounds Lead to Improved Classifiers." International Conference on Learning Representations, 2017.](https://mlanthology.org/iclr/2017/roux2017iclr-tighter/)

BibTeX

@inproceedings{roux2017iclr-tighter,
  title     = {{Tighter Bounds Lead to Improved Classifiers}},
  author    = {Le Roux, Nicolas},
  booktitle = {International Conference on Learning Representations},
  year      = {2017},
  url       = {https://mlanthology.org/iclr/2017/roux2017iclr-tighter/}
}