Sparseness of Support Vector Machines---Some Asymptotically Sharp Bounds

Abstract

The decision functions constructed by support vector machines (SVM’s) usually depend only on a subset of the training set—the so-called support vectors. We derive asymptotically sharp lower and upper bounds on the number of support vectors for several standard types of SVM’s. In particular, we show for the Gaussian RBF kernel that the fraction of support vectors tends to twice the Bayes risk for the L1-SVM, to the probability of noise for the L2-SVM, and to 1 for the LS-SVM.
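As an illustrative aside (not part of the paper): the L1-SVM claim can be probed empirically on a synthetic problem whose Bayes risk is known by construction. The sketch below, using scikit-learn's `SVC` (a soft-margin L1-SVM with hinge loss and Gaussian RBF kernel), counts support vectors as the sample size grows. The data model, label-flip rate, and the fixed `C` and `gamma` are assumptions for illustration; the paper's asymptotics require the regularization to be chosen suitably as a function of the sample size, so fixed hyperparameters only roughly track the predicted limit of twice the Bayes risk.

```python
# Minimal sketch: fraction of support vectors of an RBF L1-SVM versus
# twice the Bayes risk on a synthetic noisy problem. All parameters and
# the data model are illustrative assumptions, not from the paper.
import numpy as np
from sklearn.svm import SVC  # soft-margin L1-SVM (hinge loss)

rng = np.random.default_rng(0)

def sample(n, flip=0.1):
    """Uniform points on [0,1]^2 labeled by the sign of x0 - 0.5, with
    labels flipped with probability `flip`; the Bayes risk is `flip`."""
    X = rng.uniform(size=(n, 2))
    y = np.where(X[:, 0] > 0.5, 1, -1)
    noise = rng.uniform(size=n) < flip
    y[noise] *= -1
    return X, y

for n in (200, 1000, 5000):
    X, y = sample(n)
    clf = SVC(kernel="rbf", gamma=1.0, C=1.0).fit(X, y)
    frac = clf.n_support_.sum() / n
    print(f"n={n:5d}  fraction of SVs = {frac:.3f}  (2 x Bayes risk = 0.200)")
```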

Cite

Text

Ingo Steinwart. "Sparseness of Support Vector Machines---Some Asymptotically Sharp Bounds." Neural Information Processing Systems, 2003.

Markdown

[Ingo Steinwart. "Sparseness of Support Vector Machines---Some Asymptotically Sharp Bounds." Neural Information Processing Systems, 2003.](https://mlanthology.org/neurips/2003/steinwart2003neurips-sparseness/)

BibTeX

@inproceedings{steinwart2003neurips-sparseness,
  title     = {{Sparseness of Support Vector Machines---Some Asymptotically Sharp Bounds}},
  author    = {Steinwart, Ingo},
  booktitle = {Neural Information Processing Systems},
  year      = {2003},
  pages     = {1069--1076},
  url       = {https://mlanthology.org/neurips/2003/steinwart2003neurips-sparseness/}
}