Reducing Training Time by Efficient Localized Kernel Regression

Abstract

We study generalization properties of kernel regularized least squares regression based on a partitioning approach. We show that optimal rates of convergence are preserved if the number of local sets grows sufficiently slowly with the sample size. Moreover, the partitioning approach can be efficiently combined with local Nyström subsampling, reducing computational cost twofold.
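The two ingredients the abstract combines can be illustrated in a short sketch: the training data is split into local sets (here, crude Voronoi cells around randomly chosen centers), each local set is fit with a Nyström-subsampled kernel ridge regression, and a test point is routed to its nearest center's local estimator. This is a minimal NumPy illustration under assumed choices (RBF kernel, random landmarks, nearest-center partitioning), not the paper's implementation; all function names are hypothetical.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of X and Y
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def fit_local_nystrom(X, y, m=20, lam=1e-3, gamma=1.0, seed=0):
    # Nystroem-subsampled kernel ridge regression on one local set:
    # pick m landmark points, then solve the reduced m x m system.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(m, len(X)), replace=False)
    L = X[idx]                         # landmark points
    K_nm = rbf_kernel(X, L, gamma)     # n x m cross-kernel
    K_mm = rbf_kernel(L, L, gamma)     # m x m landmark kernel
    # regularized normal equations for the Nystroem approximation
    A = K_nm.T @ K_nm + lam * len(X) * K_mm
    alpha = np.linalg.solve(A + 1e-10 * np.eye(len(L)), K_nm.T @ y)
    return L, alpha

def fit_partitioned(X, y, k=4, m=20, lam=1e-3, gamma=1.0, seed=0):
    # crude partitioning: assign each point to the nearest of k
    # centers drawn at random from the training inputs
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    part = d.argmin(axis=1)
    models = [fit_local_nystrom(X[part == j], y[part == j], m, lam, gamma, seed=j)
              for j in range(k)]
    return models, centers

def predict_partitioned(models, centers, X_test, gamma=1.0):
    # route each test point to its nearest partition center and
    # evaluate that partition's local Nystroem estimator
    d = np.linalg.norm(X_test[:, None, :] - centers[None, :, :], axis=2)
    part = d.argmin(axis=1)
    y_hat = np.empty(len(X_test))
    for j, (L, alpha) in enumerate(models):
        mask = part == j
        if mask.any():
            y_hat[mask] = rbf_kernel(X_test[mask], L, gamma) @ alpha
    return y_hat
```

The twofold saving is visible in the linear algebra: instead of one dense n x n kernel solve, each of the k local fits only touches its partition's points (smaller n) and solves an m x m system (Nyström), with m much smaller than the local sample size.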

Cite

Text

Mücke. "Reducing Training Time by Efficient Localized Kernel Regression." Artificial Intelligence and Statistics, 2019.

Markdown

[Mücke. "Reducing Training Time by Efficient Localized Kernel Regression." Artificial Intelligence and Statistics, 2019.](https://mlanthology.org/aistats/2019/muecke2019aistats-reducing/)

BibTeX

@inproceedings{muecke2019aistats-reducing,
  title     = {{Reducing Training Time by Efficient Localized Kernel Regression}},
  author    = {Mücke, Nicole},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2019},
  pages     = {2603--2610},
  volume    = {89},
  url       = {https://mlanthology.org/aistats/2019/muecke2019aistats-reducing/}
}