Less Is More: Nyström Computational Regularization
Abstract
We study Nyström-type subsampling approaches to large-scale kernel methods, and prove learning bounds in the statistical learning setting, where random sampling and high-probability estimates are considered. In particular, we prove that these approaches can achieve optimal learning bounds, provided the subsampling level is suitably chosen. These results suggest a simple incremental variant of Nyström kernel ridge regression, where the subsampling level simultaneously controls regularization and computation. Extensive experimental analysis shows that the considered approach achieves state-of-the-art performance on benchmark large-scale datasets.
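To make the object of the abstract concrete, here is a minimal NumPy sketch of plain (batch, non-incremental) Nyström kernel ridge regression with uniform landmark subsampling, solving a standard form of the Nyström KRR linear system (K_nm^T K_nm + lam * n * K_mm) alpha = K_nm^T y. The Gaussian kernel, its bandwidth sigma, and all function names are illustrative assumptions, not the authors' code.

import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel between rows of A and rows of B.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def nystrom_krr_fit(X, y, m, lam, sigma=1.0, seed=None):
    # Uniformly subsample m landmark points, then solve the Nystrom
    # kernel ridge regression system in the m-dimensional subspace:
    #   (K_nm^T K_nm + lam * n * K_mm) alpha = K_nm^T y.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    landmarks = X[rng.choice(n, size=m, replace=False)]
    K_nm = gaussian_kernel(X, landmarks, sigma)          # n x m
    K_mm = gaussian_kernel(landmarks, landmarks, sigma)  # m x m
    A = K_nm.T @ K_nm + lam * n * K_mm
    alpha = np.linalg.lstsq(A, K_nm.T @ y, rcond=None)[0]
    return landmarks, alpha

def nystrom_krr_predict(X_test, landmarks, alpha, sigma=1.0):
    # Predictions expand only over the m landmarks, not all n points.
    return gaussian_kernel(X_test, landmarks, sigma) @ alpha

Note the role of m: training costs O(n m^2) time and O(n m) memory rather than the O(n^3)/O(n^2) of exact KRR, and, per the paper's thesis, the subsampling level m acts as a regularization parameter alongside lam, so "less" subsampling can suffice for optimal learning bounds.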
Cite

Text
Rudi et al. "Less Is More: Nyström Computational Regularization." Neural Information Processing Systems, 2015.

Markdown
[Rudi et al. "Less Is More: Nyström Computational Regularization." Neural Information Processing Systems, 2015.](https://mlanthology.org/neurips/2015/rudi2015neurips-less/)

BibTeX
@inproceedings{rudi2015neurips-less,
title = {{Less Is More: Nyström Computational Regularization}},
author = {Rudi, Alessandro and Camoriano, Raffaello and Rosasco, Lorenzo},
booktitle = {Neural Information Processing Systems},
year = {2015},
pages = {1657--1665},
url = {https://mlanthology.org/neurips/2015/rudi2015neurips-less/}
}