Improved Scaling Laws in Linear Regression via Data Reuse
Abstract
Neural scaling laws suggest that the test error of large language models trained online decreases polynomially as the model size and data size increase. However, such scaling can become unsustainable once new data runs out. In this work, we show that data reuse can improve existing scaling laws in linear regression. Specifically, we derive sharp test error bounds for $M$-dimensional linear models trained by multi-pass *stochastic gradient descent* (multi-pass SGD) on $N$ data points with sketched features. Assuming that the data covariance has a power-law spectrum of degree $a$, and that the true parameter follows a prior with an aligned power-law spectrum of degree $b-a$ (with $a > b > 1$), we show that multi-pass SGD achieves a test error of $\Theta(M^{1-b} + L^{(1-b)/a})$, where $L \lesssim N^{a/b}$ is the number of iterations. In the same setting, one-pass SGD only attains a test error of $\Theta(M^{1-b} + N^{(1-b)/a})$ (see, e.g., Lin et al., 2024). This suggests an improved scaling law via data reuse (i.e., choosing $L>N$) in data-constrained regimes. Numerical simulations are also provided to verify our theoretical findings.
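As a rough illustration of the setting described above, the sketch below (not the authors' code; the dimensions, step size, noise level, and sketch construction are illustrative assumptions) simulates SGD on randomly sketched features whose data covariance has a power-law spectrum of degree $a$, with the true parameter drawn from an aligned power-law prior of degree $b-a$, and compares a single pass over the $N$ samples against reusing them for $L > N$ iterations.

```python
# Minimal sketch of the abstract's setting: power-law covariance (degree a),
# aligned power-law prior on the true parameter (degree b - a), an M-dimensional
# model on sketched features, and SGD run one-pass (L = N) vs multi-pass (L > N).
# All constants below are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

D = 2000                      # ambient dimension (proxy for infinite-dimensional data)
M, N = 100, 500               # model (sketch) dimension and number of samples
a, b = 2.0, 1.5               # power-law degrees, a > b > 1
noise_std = 0.1               # label noise level
step = 0.2                    # constant SGD step size (illustrative choice)

idx = np.arange(1, D + 1)
lam = idx ** (-a)                                   # covariance spectrum: lambda_j = j^{-a}
theta = rng.normal(size=D) * idx ** ((a - b) / 2)   # prior variance j^{a-b}, i.e. degree b-a

# Sketched feature map x -> S x, with a random Gaussian sketch S in R^{M x D}.
S = rng.normal(size=(M, D)) / np.sqrt(M)

# Training data: x_i ~ N(0, diag(lam)), y_i = <theta, x_i> + noise.
X = rng.normal(size=(N, D)) * np.sqrt(lam)
y = X @ theta + noise_std * rng.normal(size=N)
Phi = X @ S.T                                       # sketched features, shape (N, M)

def excess_risk(w):
    """Population excess risk of the sketched predictor x -> <w, S x>."""
    delta = S.T @ w - theta
    return float(np.sum(lam * delta ** 2))

def sgd(L):
    """Run L SGD iterations on the squared loss, cycling over the N samples."""
    w = np.zeros(M)
    for t in range(L):
        i = t % N                                   # data reuse once the N samples are exhausted
        g = (Phi[i] @ w - y[i]) * Phi[i]            # stochastic gradient at sample i
        w -= step * g
    return w

print("one-pass   (L = N) :", excess_risk(sgd(N)))
print("multi-pass (L = 4N):", excess_risk(sgd(4 * N)))
```

With the step-size schedule and constants left as simple placeholders, this only mimics the qualitative comparison in the abstract; the paper's rates concern the precise dependence of the test error on $M$, $N$, and $L$.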
Cite
Text
Lin et al. "Improved Scaling Laws in Linear Regression via Data Reuse." Advances in Neural Information Processing Systems, 2025.
Markdown
[Lin et al. "Improved Scaling Laws in Linear Regression via Data Reuse." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/lin2025neurips-improved/)
BibTeX
@inproceedings{lin2025neurips-improved,
  title     = {{Improved Scaling Laws in Linear Regression via Data Reuse}},
  author    = {Lin, Licong and Wu, Jingfeng and Bartlett, Peter},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/lin2025neurips-improved/}
}