Unifying Lower Bounds on Prediction Dimension of Convex Surrogates

Abstract

The convex consistency dimension of a supervised learning task is the lowest prediction dimension $d$ such that there exists a convex surrogate $L : \mathbb{R}^d \times \mathcal Y \to \mathbb R$ that is consistent for the given task. We present a new tool based on property elicitation, $d$-flats, for lower-bounding convex consistency dimension. This tool unifies approaches from a variety of domains, including continuous and discrete prediction problems. We use $d$-flats to obtain a new lower bound on the convex consistency dimension of risk measures, resolving an open question due to Frongillo and Kash (NeurIPS 2015). In discrete prediction settings, we show that the $d$-flats approach recovers and even tightens previous lower bounds using feasible subspace dimension.
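A minimal formalization of the definition in the first sentence above (the name $\mathrm{ccdim}$ is our shorthand, not necessarily the paper's notation):

$$\mathrm{ccdim} = \min\{\, d \in \mathbb{N} \;:\; \exists\ \text{convex } L : \mathbb{R}^d \times \mathcal{Y} \to \mathbb{R} \text{ consistent for the given task} \,\}.$$

In the property elicitation view referenced above, a surrogate $L$ elicits the set-valued property $\Gamma(p) = \arg\min_{u \in \mathbb{R}^d} \mathbb{E}_{Y \sim p}\, L(u, Y)$ over distributions $p \in \Delta(\mathcal{Y})$; roughly speaking, the $d$-flats tool lower-bounds $d$ via geometric constraints that convexity of $L$ imposes on the level sets $\Gamma_r = \{\, p : r \in \Gamma(p) \,\}$.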

Cite

Text

Finocchiaro et al. "Unifying Lower Bounds on Prediction Dimension of Convex Surrogates." Neural Information Processing Systems, 2021.

Markdown

[Finocchiaro et al. "Unifying Lower Bounds on Prediction Dimension of Convex Surrogates." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/finocchiaro2021neurips-unifying/)

BibTeX

@inproceedings{finocchiaro2021neurips-unifying,
  title     = {{Unifying Lower Bounds on Prediction Dimension of Convex Surrogates}},
  author    = {Finocchiaro, Jessica and Frongillo, Rafael M. and Waggoner, Bo},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/finocchiaro2021neurips-unifying/}
}