Optimal Learning Rates for Least Squares SVMs Using Gaussian Kernels
Abstract
We prove a new oracle inequality for support vector machines with Gaussian RBF kernels solving the regularized least squares regression problem. To this end, we apply the modulus of smoothness. With the help of the new oracle inequality we then derive learning rates that can also be achieved by a simple data-dependent parameter selection method. Finally, it turns out that our learning rates are asymptotically optimal for regression functions satisfying certain standard smoothness conditions.
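The abstract does not spell out the estimator or the parameter-selection rule, so the following Python sketch is only an illustration of the general setup it refers to: a Gaussian-RBF regularized least squares fit combined with a simple training/validation choice of the regularization parameter and kernel width. All function names, the 80/20 split, and the candidate grids are illustrative assumptions, not the paper's method.

import numpy as np

def gaussian_kernel(X1, X2, gamma):
    # Gaussian RBF kernel k(x, x') = exp(-||x - x'||^2 / gamma^2),
    # with gamma playing the role of the kernel width.
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / gamma ** 2)

def fit_ls_svm(X, y, lam, gamma):
    # Regularized (kernel) least squares: the minimizer of
    # lam * ||f||_H^2 + (1/n) * sum_i (f(x_i) - y_i)^2
    # is f = sum_i alpha_i k(x_i, .) with (K + n*lam*I) alpha = y.
    n = len(X)
    K = gaussian_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    return alpha

def predict(X_train, alpha, gamma, X_test):
    return gaussian_kernel(X_test, X_train, gamma) @ alpha

def select_parameters(X, y, lams, gammas, split=0.8):
    # Illustrative data-dependent parameter selection: train on the
    # first part of the sample and keep the (lam, gamma) pair with the
    # smallest empirical least squares error on the held-out part.
    n_train = int(split * len(X))
    X_tr, y_tr = X[:n_train], y[:n_train]
    X_val, y_val = X[n_train:], y[n_train:]
    best = None
    for lam in lams:
        for gamma in gammas:
            alpha = fit_ls_svm(X_tr, y_tr, lam, gamma)
            err = np.mean((predict(X_tr, alpha, gamma, X_val) - y_val) ** 2)
            if best is None or err < best[0]:
                best = (err, lam, gamma)
    return best[1], best[2]

In this kind of scheme the candidate values for the regularization parameter and the kernel width would typically be finite, geometrically spaced grids that grow with the sample size; any finite grids can be passed to select_parameters.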
Cite
Text
Eberts and Steinwart. "Optimal Learning Rates for Least Squares SVMs Using Gaussian Kernels." Neural Information Processing Systems, 2011.
Markdown
[Eberts and Steinwart. "Optimal Learning Rates for Least Squares SVMs Using Gaussian Kernels." Neural Information Processing Systems, 2011.](https://mlanthology.org/neurips/2011/eberts2011neurips-optimal/)
BibTeX
@inproceedings{eberts2011neurips-optimal,
title = {{Optimal Learning Rates for Least Squares SVMs Using Gaussian Kernels}},
author = {Eberts, Mona and Steinwart, Ingo},
booktitle = {Neural Information Processing Systems},
year = {2011},
pages = {1539--1547},
url = {https://mlanthology.org/neurips/2011/eberts2011neurips-optimal/}
}