Kernel Regression in High Dimensions: Refined Analysis Beyond Double Descent
Abstract
In this paper, we provide a precise characterization of the generalization properties of high-dimensional kernel ridge regression across the under- and over-parameterized regimes, depending on whether the number of training samples n exceeds the feature dimension d. By establishing a bias-variance decomposition of the expected excess risk, we show that, while the bias is (almost) independent of d and monotonically decreases with n, the variance depends on both n and d and can be unimodal or monotonically decreasing under different regularization schemes. Our refined analysis goes beyond the double descent theory by showing that, depending on the data eigen-profile and the level of regularization, the kernel regression risk curve can be a double-descent-like, bell-shaped, or monotonic function of n. Experiments on synthetic and real data support our theoretical findings.
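As a rough illustration of the risk curves described above, the following Python sketch estimates the excess risk of kernel ridge regression by Monte Carlo as the sample size n sweeps from below to above the feature dimension d, under two regularization levels. The Gaussian (RBF) kernel, the target function, the noise level, and the regularization values `lam` are all hypothetical choices for illustration, not the paper's experimental setup; as the abstract indicates, the shape of the resulting curve depends on such choices.

```python
import numpy as np

# Hypothetical Monte Carlo sketch: excess risk of kernel ridge regression
# (KRR) as n sweeps past d. All modeling choices below are illustrative.

rng = np.random.default_rng(0)
d = 50          # feature dimension
noise = 0.5     # label noise standard deviation
n_test = 500    # test points per trial
trials = 20     # Monte Carlo repetitions

def rbf_kernel(X, Z, gamma):
    # K[i, j] = exp(-gamma * ||x_i - z_j||^2)
    sq = (X**2).sum(1)[:, None] + (Z**2).sum(1)[None, :] - 2 * X @ Z.T
    return np.exp(-gamma * sq)

def target(X):
    # A fixed smooth target function; any f* would do for illustration.
    return np.sin(X.sum(axis=1) / np.sqrt(d))

def excess_risk(n, lam, gamma=1.0 / d):
    risks = []
    for _ in range(trials):
        Xtr = rng.standard_normal((n, d)) / np.sqrt(d)
        Xte = rng.standard_normal((n_test, d)) / np.sqrt(d)
        ytr = target(Xtr) + noise * rng.standard_normal(n)
        # Standard KRR dual solution: alpha = (K + n*lam*I)^{-1} y
        K = rbf_kernel(Xtr, Xtr, gamma)
        alpha = np.linalg.solve(K + n * lam * np.eye(n), ytr)
        preds = rbf_kernel(Xte, Xtr, gamma) @ alpha
        risks.append(np.mean((preds - target(Xte))**2))
    return np.mean(risks)

# Sweep n across n < d and n > d under two regularization schedules.
for n in [10, 25, 50, 100, 200, 400]:
    print(n,
          round(excess_risk(n, lam=1e-1), 4),   # heavier regularization
          round(excess_risk(n, lam=1e-6), 4))   # near-interpolating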
Cite
Text
Liu et al. "Kernel Regression in High Dimensions: Refined Analysis Beyond Double Descent." Artificial Intelligence and Statistics, 2021.

Markdown
[Liu et al. "Kernel Regression in High Dimensions: Refined Analysis Beyond Double Descent." Artificial Intelligence and Statistics, 2021.](https://mlanthology.org/aistats/2021/liu2021aistats-kernel/)

BibTeX
@inproceedings{liu2021aistats-kernel,
title = {{Kernel Regression in High Dimensions: Refined Analysis Beyond Double Descent}},
author = {Liu, Fanghui and Liao, Zhenyu and Suykens, Johan},
booktitle = {Artificial Intelligence and Statistics},
year = {2021},
pages = {649--657},
volume = {130},
url = {https://mlanthology.org/aistats/2021/liu2021aistats-kernel/}
}