Embedding Dimension of Polyhedral Losses

Abstract

A common technique in supervised learning with discrete losses, such as 0-1 loss, is to optimize a convex surrogate loss over R^d, calibrated with respect to the original loss. In particular, recent work has investigated embedding the original predictions (e.g. labels) as points in R^d, showing an equivalence to using polyhedral surrogates. In this work, we study the notion of the embedding dimension of a given discrete loss: the minimum dimension d such that an embedding exists. We characterize d-embeddability for all d, with a particularly tight characterization for d=1 (embedding into the real line), and useful necessary conditions for d>1 in the form of a quadratic feasibility program. We illustrate our results with novel lower bounds for the abstain loss.
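As an illustrative sketch (not taken from the paper), the classic example of this setup is binary 0-1 loss embedded into R^1 via the hinge loss, a polyhedral (piecewise-linear) surrogate: labels y in {-1, +1} are embedded as the points -1 and +1 on the real line, and minimizing expected hinge loss drives the report to an embedded point that matches the Bayes-optimal label.

```python
# Illustrative sketch: 0-1 loss embedded into R^1 via the hinge loss,
# a polyhedral surrogate. Labels y in {-1, +1} embed as the points -1 and +1.

def hinge(u, y):
    """Polyhedral surrogate: L(u, y) = max(0, 1 - y*u)."""
    return max(0.0, 1.0 - y * u)

def expected_hinge(u, p):
    """Expected surrogate loss at report u when P(y = +1) = p."""
    return p * hinge(u, +1) + (1 - p) * hinge(u, -1)

def surrogate_minimizer(p, grid=None):
    """Minimize the expected hinge loss over a grid of reports u in [-1, 1].

    On [-1, 1] the expected hinge is linear in u with slope 1 - 2p, so the
    minimizer sits at an embedded label point whenever p != 1/2.
    """
    if grid is None:
        grid = [i / 100 - 1.0 for i in range(201)]
    return min(grid, key=lambda u: expected_hinge(u, p))
```

For example, `surrogate_minimizer(0.8)` returns the embedded point +1 and `surrogate_minimizer(0.2)` returns -1, so the link u -> sign(u) recovers the report that minimizes expected 0-1 loss.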

Cite

Text

Finocchiaro et al. "Embedding Dimension of Polyhedral Losses." Conference on Learning Theory, 2020.

Markdown

[Finocchiaro et al. "Embedding Dimension of Polyhedral Losses." Conference on Learning Theory, 2020.](https://mlanthology.org/colt/2020/finocchiaro2020colt-embedding/)

BibTeX

@inproceedings{finocchiaro2020colt-embedding,
  title     = {{Embedding Dimension of Polyhedral Losses}},
  author    = {Finocchiaro, Jessie and Frongillo, Rafael and Waggoner, Bo},
  booktitle = {Conference on Learning Theory},
  year      = {2020},
  pages     = {1558--1585},
  volume    = {125},
  url       = {https://mlanthology.org/colt/2020/finocchiaro2020colt-embedding/}
}