An Embedding Framework for Consistent Polyhedral Surrogates

Abstract

We formalize and study the natural approach of designing convex surrogate loss functions via embeddings for problems such as classification or ranking. In this approach, one embeds each of the finitely many predictions (e.g., classes) as a point in ℝ^d, assigns the original loss values to these points, and convexifies the loss in some way to obtain a surrogate. We prove that this approach is equivalent, in a strong sense, to working with polyhedral (piecewise linear convex) losses. Moreover, given any polyhedral loss L, we give a construction of a link function through which L is a consistent surrogate for the loss it embeds. We go on to illustrate the power of this embedding framework with succinct proofs of consistency or inconsistency of various polyhedral surrogates in the literature.
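
To make the embedding idea concrete, here is a minimal Python sketch, not taken from the paper, using the standard binary-classification example: the hinge loss is a polyhedral surrogate whose values at the embedding points u = +1 and u = -1 agree with (twice) the 0-1 loss, and the sign link maps real-valued surrogate reports back to the two classes. The function names and the grid search below are illustrative choices, not the paper's construction.

def zero_one_loss(prediction, y):
    # Original discrete target loss over predictions in {-1, +1}.
    return 0.0 if prediction == y else 1.0

def hinge_loss(u, y):
    # Polyhedral (piecewise linear convex) surrogate over real-valued reports u.
    return max(0.0, 1.0 - u * y)

def sign_link(u):
    # Link function mapping a surrogate report u back to a class in {-1, +1}.
    return 1 if u >= 0 else -1

# The surrogate "embeds" (a scaling of) the 0-1 loss: at the embedded points
# u = +1 and u = -1, the hinge loss equals 2 * zero_one_loss.
for prediction in (-1, 1):
    for y in (-1, 1):
        assert hinge_loss(float(prediction), y) == 2 * zero_one_loss(prediction, y)

# Minimizing the expected surrogate over a grid and applying the link recovers
# the Bayes-optimal class whenever one class is strictly more likely.
for p in (0.2, 0.8):  # p = Pr[y = +1]
    grid = [i / 100 - 2 for i in range(401)]
    u_star = min(grid, key=lambda u: p * hinge_loss(u, 1) + (1 - p) * hinge_loss(u, -1))
    assert sign_link(u_star) == (1 if p > 0.5 else -1)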

Cite

Text

Finocchiaro et al. "An Embedding Framework for Consistent Polyhedral Surrogates." Neural Information Processing Systems, 2019.

Markdown

[Finocchiaro et al. "An Embedding Framework for Consistent Polyhedral Surrogates." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/finocchiaro2019neurips-embedding/)

BibTeX

@inproceedings{finocchiaro2019neurips-embedding,
  title     = {{An Embedding Framework for Consistent Polyhedral Surrogates}},
  author    = {Finocchiaro, Jessica and Frongillo, Rafael and Waggoner, Bo},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {10781--10791},
  url       = {https://mlanthology.org/neurips/2019/finocchiaro2019neurips-embedding/}
}