Fast Rates for Support Vector Machines

Abstract

We establish learning rates to the Bayes risk for support vector machines (SVMs) using a regularization sequence $\lambda_n = n^{-\alpha}$, where $\alpha \in (0,1)$ is arbitrary. Under a noise condition recently proposed by Tsybakov, these rates can become faster than $n^{-1/2}$. In order to deal with the approximation error, we present a general concept called the approximation error function, which describes how well the infinite-sample versions of the considered SVMs approximate the data-generating distribution. In addition, we discuss in some detail the relation between the “classical” approximation error and the approximation error function. Finally, for distributions satisfying a geometric noise assumption, we establish some learning rates when the RKHS used is a Sobolev space.
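
For orientation, here is a minimal sketch of the regularized objective such rates refer to, assuming the standard hinge-loss (L1-)SVM over an RKHS $H$; the exact normalization is an assumption for illustration, not taken verbatim from the paper:

$$
f_{D,\lambda_n} \;=\; \operatorname*{arg\,min}_{f \in H}\; \lambda_n \|f\|_H^2 \;+\; \frac{1}{n}\sum_{i=1}^{n} \max\bigl(0,\, 1 - y_i f(x_i)\bigr), \qquad \lambda_n = n^{-\alpha},\ \alpha \in (0,1).
$$

In this reading, the infinite-sample version mentioned in the abstract replaces the empirical average by the expectation under the data-generating distribution, and the approximation error function then describes, as a function of $\lambda$, how well that infinite-sample solution performs.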

Cite

Text

Steinwart and Scovel. "Fast Rates for Support Vector Machines." Annual Conference on Computational Learning Theory, 2005. doi:10.1007/11503415_19

Markdown

[Steinwart and Scovel. "Fast Rates for Support Vector Machines." Annual Conference on Computational Learning Theory, 2005.](https://mlanthology.org/colt/2005/steinwart2005colt-fast/) doi:10.1007/11503415_19

BibTeX

@inproceedings{steinwart2005colt-fast,
  title     = {{Fast Rates for Support Vector Machines}},
  author    = {Steinwart, Ingo and Scovel, Clint},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2005},
  pages     = {279--294},
  doi       = {10.1007/11503415_19},
  url       = {https://mlanthology.org/colt/2005/steinwart2005colt-fast/}
}