New Bounds for Hyperparameter Tuning of Regression Problems Across Instances
Abstract
The task of tuning regularization coefficients in regularized regression models with provable guarantees across problem instances remains a significant challenge in the literature. This paper investigates the sample complexity of tuning regularization parameters for linear and logistic regression under $\ell_1$- and $\ell_2$-constraints in the data-driven setting. For the linear regression problem, by more carefully exploiting the structure of the dual function class, we provide a new upper bound on the pseudo-dimension of the validation loss function class, which significantly improves the best-known results for the problem. Remarkably, we also establish the first matching lower bound, proving that our results are tight. For tuning the regularization parameters of logistic regression, we introduce a new approach to studying the learning guarantee via an approximation of the validation loss function class. We examine the pseudo-dimension of the approximation class and construct a uniform error bound between the validation loss function class and its approximation, which allows us to establish the first learning guarantee for the problem of tuning logistic regression regularization coefficients.
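To make the setting concrete, the following is a minimal sketch of the standard penalized (ElasticNet-style) formulation this line of work studies; the notation (instance $P = (X, y, X', y')$, coefficients $\lambda_1, \lambda_2$, estimator $\hat{\beta}$) is illustrative and may differ from the paper's own, and the constrained formulations are analogous.

$$
\hat{\beta}(\lambda_1, \lambda_2) \in \operatorname*{argmin}_{\beta} \; \|X\beta - y\|_2^2 + \lambda_1 \|\beta\|_1 + \lambda_2 \|\beta\|_2^2,
\qquad
h_{\lambda_1, \lambda_2}(P) = \|X'\hat{\beta}(\lambda_1, \lambda_2) - y'\|_2^2,
$$

where $(X, y)$ is the training split and $(X', y')$ the validation split of a problem instance $P$. Under this reading, the validation loss function class referenced in the abstract is the family $\{h_{\lambda_1, \lambda_2}\}$ of instance-to-validation-loss maps indexed by the regularization coefficients, and the sample complexity of tuning across instances is controlled by its pseudo-dimension.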
Cite
Text
Balcan et al. "New Bounds for Hyperparameter Tuning of Regression Problems Across Instances." Neural Information Processing Systems, 2023.Markdown
[Balcan et al. "New Bounds for Hyperparameter Tuning of Regression Problems Across Instances." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/balcan2023neurips-new/)BibTeX
@inproceedings{balcan2023neurips-new,
  title = {{New Bounds for Hyperparameter Tuning of Regression Problems Across Instances}},
  author = {Balcan, Maria-Florina F and Nguyen, Anh and Sharma, Dravyansh},
  booktitle = {Neural Information Processing Systems},
  year = {2023},
  url = {https://mlanthology.org/neurips/2023/balcan2023neurips-new/}
}