Optimal Rates for Regularized Least Squares Regression
Abstract
We establish a new oracle inequality for kernel-based, regularized least squares regression methods, which uses the eigenvalues of the associated integral operator as a complexity measure. We then use this oracle inequality to derive learning rates for these methods. Here, it turns out that these rates are independent of the exponent of the regularization term. Finally, we show that our learning rates are asymptotically optimal whenever, e.g., the kernel is continuous and the input space is a compact metric space.
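For context, the estimators considered in this line of work take the following general form; the notation below (RKHS $H$ of the kernel, regularization parameter $\lambda$, regularization exponent $q$) is the standard setup and is an assumption for illustration, not quoted from the paper:

```latex
% Minimal sketch of the kernel-based regularized least squares estimator.
% Assumed notation: H is the RKHS of the kernel, \lambda > 0 the regularization
% parameter, q \ge 1 the regularization exponent, and
% D = ((x_1, y_1), \dots, (x_n, y_n)) the training sample.
f_{D,\lambda} \;=\; \operatorname*{arg\,min}_{f \in H}\;
  \lambda \, \|f\|_{H}^{q} \;+\; \frac{1}{n} \sum_{i=1}^{n} \bigl( f(x_i) - y_i \bigr)^{2}
```

The abstract's claim that the learning rates are independent of the exponent of the regularization term refers to the exponent $q$ in this sketch; the familiar kernel ridge regression corresponds to $q = 2$.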
Cite
Text
Steinwart et al. "Optimal Rates for Regularized Least Squares Regression." Annual Conference on Computational Learning Theory, 2009.
Markdown
[Steinwart et al. "Optimal Rates for Regularized Least Squares Regression." Annual Conference on Computational Learning Theory, 2009.](https://mlanthology.org/colt/2009/steinwart2009colt-optimal/)
BibTeX
@inproceedings{steinwart2009colt-optimal,
title = {{Optimal Rates for Regularized Least Squares Regression}},
author = {Steinwart, Ingo and Hush, Don R. and Scovel, Clint},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2009},
url = {https://mlanthology.org/colt/2009/steinwart2009colt-optimal/}
}