The Geometry of Losses

Abstract

Loss functions are central to machine learning because they are the means by which the quality of a prediction is evaluated. Any loss that is not proper, or cannot be transformed to be proper via a link function, is inadmissible. All admissible losses for $n$-class problems can be obtained in terms of a convex body in $\mathbb{R}^n$. We make this explicit and show how some existing results simplify when viewed from this perspective. This allows the development of a rich algebra of losses induced by binary operations on convex bodies (operations that again return a convex body). Furthermore, it allows us to define an “inverse loss”, which provides a universal “substitution function” for the Aggregating Algorithm. In doing so we show a formal connection between proper losses and norms.
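
To make the loss–convex-set correspondence in the abstract concrete, the sketch below works through binary log loss in the superprediction-set framing standard in this literature. The notation ($S_\ell$, $\underline{L}$) and the presentation are ours rather than the paper's, and the paper's precise construction of the convex body itself may differ in detail; this only illustrates the convexity structure a proper loss induces.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Superprediction set of a loss \ell with components \ell_i
% (all points that some prediction v dominates coordinatewise):
\[
  S_\ell = \bigl\{\, x \in \mathbb{R}^n : \exists\, v,\;
           x_i \ge \ell_i(v) \text{ for all } i \,\bigr\}.
\]
% For binary log loss \ell(q) = (-\log q,\, -\log(1-q)), q \in (0,1),
% the set S_\ell is the region on and above the curve
\[
  x_2 = -\log\!\bigl(1 - e^{-x_1}\bigr), \qquad x_1 > 0,
\]
% which is convex (S_\ell is unbounded; the paper's convex bodies
% arise from a related bounded construction). The conditional Bayes
% risk is then a support-type functional of S_\ell:
\[
  \underline{L}(p) = \inf_{x \in S_\ell} \bigl( p\,x_1 + (1-p)\,x_2 \bigr)
                   = -p\log p - (1-p)\log(1-p),
\]
% i.e. Shannon entropy: the loss determines the convex set and,
% conversely, the convex set determines the proper loss.
\end{document}
```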

Cite

Text

Williamson. "The Geometry of Losses." Annual Conference on Computational Learning Theory, 2014.

Markdown

[Williamson. "The Geometry of Losses." Annual Conference on Computational Learning Theory, 2014.](https://mlanthology.org/colt/2014/williamson2014colt-geometry/)

BibTeX

@inproceedings{williamson2014colt-geometry,
  title     = {{The Geometry of Losses}},
  author    = {Williamson, Robert C.},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2014},
  pages     = {1078--1108},
  url       = {https://mlanthology.org/colt/2014/williamson2014colt-geometry/}
}