Sparsity of SVMs That Use the Epsilon-Insensitive Loss
Abstract
In this paper lower and upper bounds for the number of support vectors are derived for support vector machines (SVMs) based on the epsilon-insensitive loss function. It turns out that these bounds are asymptotically tight under mild assumptions on the data generating distribution. Finally, we briefly discuss a trade-off in epsilon between sparsity and accuracy if the SVM is used to estimate the conditional median.
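The epsilon-insensitive loss at the heart of the paper ignores residuals smaller than epsilon, which is what produces sparsity: training points whose predictions fall inside the epsilon-tube incur zero loss and do not become support vectors. A minimal sketch of the loss itself (the function name and example values are illustrative, not from the paper):

```python
def eps_insensitive_loss(y_true: float, y_pred: float, eps: float = 0.1) -> float:
    """Epsilon-insensitive loss: L_eps(r) = max(0, |r| - eps).

    Residuals with |y_true - y_pred| <= eps lie inside the epsilon-tube
    and incur zero loss; larger residuals are penalized linearly.
    """
    return max(0.0, abs(y_true - y_pred) - eps)


# Inside the tube: no loss, so such a point need not be a support vector.
inside = eps_insensitive_loss(1.0, 1.05, eps=0.1)   # -> 0.0
# Outside the tube: linear penalty beyond the tube boundary.
outside = eps_insensitive_loss(1.0, 1.30, eps=0.1)  # -> 0.2 (approx.)
```

Widening the tube (larger epsilon) leaves more points loss-free and hence tends to yield fewer support vectors, which is the sparsity side of the trade-off the abstract mentions; the cost is reduced accuracy when the SVM is used to estimate the conditional median.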
Cite
Text
Steinwart and Christmann. "Sparsity of SVMs That Use the Epsilon-Insensitive Loss." Neural Information Processing Systems, 2008.
Markdown
[Steinwart and Christmann. "Sparsity of SVMs That Use the Epsilon-Insensitive Loss." Neural Information Processing Systems, 2008.](https://mlanthology.org/neurips/2008/steinwart2008neurips-sparsity/)
BibTeX
@inproceedings{steinwart2008neurips-sparsity,
title = {{Sparsity of SVMs That Use the Epsilon-Insensitive Loss}},
author = {Steinwart, Ingo and Christmann, Andreas},
booktitle = {Neural Information Processing Systems},
year = {2008},
pages = {1569--1576},
url = {https://mlanthology.org/neurips/2008/steinwart2008neurips-sparsity/}
}