Under-Parameterized Double Descent for Ridge Regularized Least Squares Denoising of Data on a Line
Abstract
In this paper, we present a simple example that provably exhibits double descent in the under-parameterized regime. For simplicity, we study the ridge regularized least squares denoising problem for data on a line embedded in high-dimensional space. By deriving an asymptotically accurate formula for the generalization error, we observe sample-wise and parameter-wise double descent with the peak in the under-parameterized regime, rather than at the interpolation point or in the over-parameterized regime. Further, the peak of the sample-wise double descent curve coincides with a peak in the curve for the norm of the estimator, and adjusting $\mu$, the strength of the ridge regularization, shifts the location of the peak. Parameter-wise double descent occurs for this model when $\mu$ is small. For larger values of $\mu$, the curve for the norm of the estimator still has a peak, but this no longer translates into a peak in the generalization error.
Cite
Sonthalia et al. "Under-Parameterized Double Descent for Ridge Regularized Least Squares Denoising of Data on a Line." NeurIPS 2023 Workshops: M3L, 2023.

BibTeX
@inproceedings{sonthalia2023neuripsw-underparameterized,
title = {{Under-Parameterized Double Descent for Ridge Regularized Least Squares Denoising of Data on a Line}},
author = {Sonthalia, Rishi and Li, Xinyue and Gu, Bochao},
booktitle = {NeurIPS 2023 Workshops: M3L},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/sonthalia2023neuripsw-underparameterized/}
}