Boosted Kernel Ridge Regression: Optimal Learning Rates and Early Stopping

Abstract

In this paper, we introduce a learning algorithm, boosted kernel ridge regression (BKRR), that combines $L_2$-Boosting with kernel ridge regression (KRR). We analyze the learning performance of this algorithm in the framework of learning theory. We show that BKRR provides a new bias-variance trade-off, tuned through the number of boosting iterations, in contrast to KRR, which is tuned through the regularization parameter. A (semi-)exponential bias-variance trade-off is derived for BKRR, exhibiting a stable relationship between the generalization error and the number of iterations. Furthermore, an adaptive stopping rule is proposed, with which BKRR achieves the optimal learning rate without saturation.
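The abstract summarizes the algorithm without pseudocode, so the following is a minimal sketch of $L_2$-Boosting with a KRR base learner: start from the zero function and repeatedly add a KRR fit to the current residuals, with the iteration count acting as the early-stopping knob. This is an illustrative reading of the abstract, not the paper's exact formulation; the function name `bkrr_fit`, the Gaussian kernel, the `n * lam` scaling convention, and the fixed `num_iters` (the paper proposes an adaptive stopping rule instead) are all assumptions.

```python
import numpy as np

def bkrr_fit(K, y, lam, num_iters):
    """Sketch of boosted kernel ridge regression (assumed form, not the paper's code).

    K         : (n, n) kernel Gram matrix on the training inputs
    y         : (n,) training targets
    lam       : KRR regularization parameter
    num_iters : number of boosting iterations (the early-stopping knob)

    Returns alpha such that the boosted estimator is f(x) = sum_i alpha[i] * k(x, x_i).
    """
    n = len(y)
    # One KRR solve maps a residual vector r to coefficients (K + n*lam*I)^{-1} r;
    # the n*lam scaling assumes an averaged squared loss, a common convention.
    A = K + n * lam * np.eye(n)
    alpha = np.zeros(n)
    residual = np.asarray(y, dtype=float).copy()
    for _ in range(num_iters):
        step = np.linalg.solve(A, residual)  # KRR fit to the current residuals
        alpha += step                        # add the weak learner to the ensemble
        residual -= K @ step                 # update residuals on the training set
    return alpha

# Hypothetical usage on toy 1-D data with a Gaussian kernel.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(50)
K = np.exp(-((X - X.T) ** 2) / 0.1)          # Gram matrix; bandwidth chosen arbitrarily
alpha = bkrr_fit(K, y, lam=0.01, num_iters=20)
train_fit = K @ alpha                        # in-sample predictions
```

Each boosting step applies the same linear smoother to the current residuals, so the bias shrinks as iterations accumulate while the variance grows; stopping the loop early plays the regularization role that the penalty parameter plays in plain KRR, which is the trade-off the paper analyzes.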

Cite

Text

Lin et al. "Boosted Kernel Ridge Regression: Optimal Learning Rates and Early Stopping." Journal of Machine Learning Research, 20:1-36, 2019.

Markdown

[Lin et al. "Boosted Kernel Ridge Regression: Optimal Learning Rates and Early Stopping." Journal of Machine Learning Research, 20:1-36, 2019.](https://mlanthology.org/jmlr/2019/lin2019jmlr-boosted/)

BibTeX

@article{lin2019jmlr-boosted,
  title     = {{Boosted Kernel Ridge Regression: Optimal Learning Rates and Early Stopping}},
  author    = {Lin, Shao-Bo and Lei, Yunwen and Zhou, Ding-Xuan},
  journal   = {Journal of Machine Learning Research},
  year      = {2019},
  pages     = {1--36},
  volume    = {20},
  url       = {https://mlanthology.org/jmlr/2019/lin2019jmlr-boosted/}
}