Surrogate Regret Bounds for Proper Losses

Abstract

We present tight surrogate regret bounds for the class of proper (i.e., Fisher consistent) losses. The bounds generalise the margin-based bounds due to Bartlett et al. (2006). The proof uses Taylor's theorem and leads to new representations for loss and regret, and to a simple proof of the integral representation of proper losses. We also present a different formulation of a duality result for Bregman divergences, which leads to a demonstration of the convexity of composite losses using canonical link functions.
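The integral representation mentioned in the abstract is standard in the proper-losses literature; as a hedged sketch (notation assumed, not taken from this page), a proper loss with conditional risk $L(\eta) = \eta\,\ell_1(\eta) + (1-\eta)\,\ell_{-1}(\eta)$ can be written in terms of a weight function $w(c) = -L''(c)$ over cost-sensitive misclassification losses:

```latex
% Partial losses of a proper loss, expressed via the weight
% function w(c) = -L''(c) (Shuford et al. style representation;
% notation is assumed, not taken from this citation page).
\ell_1(\hat{\eta})    = \int_{\hat{\eta}}^{1} (1-c)\, w(c)\, dc,
\qquad
\ell_{-1}(\hat{\eta}) = \int_{0}^{\hat{\eta}} c\, w(c)\, dc.
% Example check: log loss has w(c) = 1/(c(1-c)), giving
% \ell_1(\hat{\eta}) = -\log\hat{\eta} and
% \ell_{-1}(\hat{\eta}) = -\log(1-\hat{\eta}).
```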

Cite

Text

Reid and Williamson. "Surrogate Regret Bounds for Proper Losses." International Conference on Machine Learning, 2009. doi:10.1145/1553374.1553489

Markdown

[Reid and Williamson. "Surrogate Regret Bounds for Proper Losses." International Conference on Machine Learning, 2009.](https://mlanthology.org/icml/2009/reid2009icml-surrogate/) doi:10.1145/1553374.1553489

BibTeX

@inproceedings{reid2009icml-surrogate,
  title     = {{Surrogate Regret Bounds for Proper Losses}},
  author    = {Reid, Mark D. and Williamson, Robert C.},
  booktitle = {International Conference on Machine Learning},
  year      = {2009},
  pages     = {897--904},
  doi       = {10.1145/1553374.1553489},
  url       = {https://mlanthology.org/icml/2009/reid2009icml-surrogate/}
}