Approximate Is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity
Abstract
We present and study approximate notions of dimensional and margin complexity, which correspond to the minimal dimension or norm of an embedding required to {\em approximate}, rather than exactly represent, a given hypothesis class. We show that such notions are not only sufficient for learning using linear predictors or a kernel, but unlike the exact variants, are also necessary. Thus they are better suited for discussing limitations of linear or kernel methods.
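To make the abstract's claim concrete, the following is a hedged sketch of one way such an approximate (probabilistic) dimension complexity could be formalized; the notation $\mathrm{dc}_{\delta}$, the embedding $\phi$, and the exact quantifier order are our illustration and are not necessarily the paper's precise definition:

% Hedged sketch, not necessarily the paper's exact definition:
% for accuracy parameter \delta, the approximate dimension complexity of a class
% \mathcal{H} over domain \mathcal{X} is the smallest embedding dimension d such
% that every h \in \mathcal{H} is approximated, under every data distribution
% \mathcal{D}, by some linear predictor over the embedding.
\[
  \mathrm{dc}_{\delta}(\mathcal{H}) \;=\;
  \min\Bigl\{ d \;:\; \exists\, \phi : \mathcal{X} \to \mathbb{R}^{d}
  \ \text{s.t.}\ \forall h \in \mathcal{H},\ \forall \mathcal{D},\
  \exists\, w \in \mathbb{R}^{d},\
  \Pr_{x \sim \mathcal{D}}\bigl[\operatorname{sign}(\langle w, \phi(x)\rangle) \neq h(x)\bigr] \le \delta \Bigr\}
\]

The exact variant would instead demand zero error for every point of the domain; relaxing this to error at most $\delta$ under the data distribution is what makes the notion both sufficient and necessary for learning with linear or kernel methods.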
Cite
Text
Kamath et al. "Approximate Is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity." Conference on Learning Theory, 2020.

Markdown

[Kamath et al. "Approximate Is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity." Conference on Learning Theory, 2020.](https://mlanthology.org/colt/2020/kamath2020colt-approximate/)

BibTeX
@inproceedings{kamath2020colt-approximate,
title = {{Approximate Is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity}},
author = {Kamath, Pritish and Montasser, Omar and Srebro, Nathan},
booktitle = {Conference on Learning Theory},
year = {2020},
pages = {2236--2262},
volume = {125},
url = {https://mlanthology.org/colt/2020/kamath2020colt-approximate/}
}