A Theoretical Analysis of the Test Error of Finite-Rank Kernel Ridge Regression

Abstract

Existing statistical learning guarantees for general kernel regressors often yield loose bounds when applied to finite-rank kernels. Yet, finite-rank kernels naturally appear in a number of machine learning problems, e.g. when fine-tuning a pre-trained deep neural network's last layer to adapt it to a novel task in transfer learning. We address this gap for finite-rank kernel ridge regression (KRR) by deriving sharp non-asymptotic upper and lower bounds on the test error of any finite-rank KRR. Our bounds are tighter than previously derived bounds for finite-rank KRR and, unlike comparable results, they remain valid for any choice of the regularization parameter.
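
As a concrete illustration of the setting studied in the paper, the sketch below fits KRR with a finite-rank kernel induced by a fixed R-dimensional feature map (e.g. frozen last-layer features of a pre-trained network) and evaluates its test error. The feature map, data, regularization value, and all names are illustrative assumptions, not the paper's experiments or bounds.

# Minimal sketch of finite-rank kernel ridge regression (KRR).
# Assumptions (not from the paper): a fixed rank-R feature map phi,
# synthetic data, and an arbitrary regularization value lam.
import numpy as np

rng = np.random.default_rng(0)

R, d, n_train, n_test = 10, 5, 200, 1000
W = rng.normal(size=(d, R))          # frozen "last-layer" weights: phi(x) = tanh(xW)

def phi(X):
    return np.tanh(X @ W)            # rank-R feature map, so k(x, x') = phi(x) . phi(x')

def target(X):
    return np.sin(X[:, 0]) + 0.5 * X[:, 1]

X_train = rng.normal(size=(n_train, d))
y_train = target(X_train) + 0.1 * rng.normal(size=n_train)
X_test = rng.normal(size=(n_test, d))

lam = 1e-2                           # ridge regularization parameter

# With a finite-rank kernel, KRR reduces to ridge regression in feature space:
# alpha = (Phi^T Phi + n * lam * I)^{-1} Phi^T y
Phi = phi(X_train)
alpha = np.linalg.solve(Phi.T @ Phi + n_train * lam * np.eye(R), Phi.T @ y_train)

# Test error: mean squared error of the KRR predictor against the noiseless target.
y_hat = phi(X_test) @ alpha
test_error = np.mean((y_hat - target(X_test)) ** 2)
print(f"finite-rank KRR test error: {test_error:.4f}")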

Cite

Text

Cheng et al. "A Theoretical Analysis of the Test Error of Finite-Rank Kernel Ridge Regression." Neural Information Processing Systems, 2023.

Markdown

[Cheng et al. "A Theoretical Analysis of the Test Error of Finite-Rank Kernel Ridge Regression." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/cheng2023neurips-theoretical/)

BibTeX

@inproceedings{cheng2023neurips-theoretical,
  title     = {{A Theoretical Analysis of the Test Error of Finite-Rank Kernel Ridge Regression}},
  author    = {Cheng, Tin Sum and Lucchi, Aurelien and Kratsios, Anastasis and Dokmanić, Ivan and Belius, David},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/cheng2023neurips-theoretical/}
}