On the Limitations of Embedding Methods

Abstract

We show that for any class of functions H that has a reasonable combinatorial dimension, the vast majority of small subsets of the combinatorial cube cannot be represented as a Lipschitz image of a subset of H, unless the Lipschitz constant is very large. We apply this result to the case when H consists of linear functionals of norm at most one on a Hilbert space, and thus show that "most" classification problems cannot be represented as a reasonable Lipschitz loss of a kernel class.
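One hedged way to formalize the qualitative claim above is the following sketch; the symbols here (n for the cube dimension, B for a target subset, L for the Lipschitz constant) are notational assumptions for illustration, not the paper's exact theorem statement or bounds.

```latex
% Hedged sketch, not the paper's precise theorem:
% for "most" small subsets B of the combinatorial cube \{-1,1\}^n,
% no subset A \subseteq H admits an L-Lipschitz map onto B
% unless L is very large, i.e.
\text{for most } B \subseteq \{-1,1\}^n \text{ with } |B| \text{ small:}
\quad \nexists\, A \subseteq H,\ \varphi : A \to \{-1,1\}^n
\ \text{$L$-Lipschitz with } \varphi(A) = B,
\quad \text{unless } L \text{ is very large.}
```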

Cite

Text

Shahar Mendelson. "On the Limitations of Embedding Methods." Annual Conference on Computational Learning Theory, 2005. doi:10.1007/11503415_24

Markdown

[Shahar Mendelson. "On the Limitations of Embedding Methods." Annual Conference on Computational Learning Theory, 2005.](https://mlanthology.org/colt/2005/mendelson2005colt-limitations/) doi:10.1007/11503415_24

BibTeX

@inproceedings{mendelson2005colt-limitations,
  title     = {{On the Limitations of Embedding Methods}},
  author    = {Mendelson, Shahar},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2005},
  pages     = {353--365},
  doi       = {10.1007/11503415_24},
  url       = {https://mlanthology.org/colt/2005/mendelson2005colt-limitations/}
}