Early Stopping for Iterative Regularization with General Loss Functions
Abstract
In this paper, we investigate early stopping strategies for iterative regularization based on gradient descent of convex loss functions in reproducing kernel Hilbert spaces, without an explicit regularization term. We show that projecting the last iterate at the stopping time yields an estimator with improved generalization ability. Using upper bounds on the generalization error, we establish a close link between iterative regularization and the Tikhonov regularization scheme, and explain theoretically why the two schemes exhibit similar regularization paths in existing numerical simulations. We introduce a data-dependent, cross-validation-based rule to select the stopping time, and prove that this a posteriori selection achieves generalization errors comparable to those obtained by our stopping rules with a priori chosen parameters.
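To make the procedure concrete, here is a minimal sketch of gradient descent in an RKHS without a penalty term, where the stopping time is picked on a held-out split and the returned iterate is projected onto an RKHS ball. The least-squares loss, the Gaussian kernel, and the names `kernel_gd_early_stopping` and `radius` are illustrative assumptions for this sketch; the paper itself treats general convex losses and analyzes the stopping rule theoretically.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel between two sample sets.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_gd_early_stopping(X_tr, y_tr, X_val, y_val,
                             step=0.1, max_iter=500, radius=None):
    """Gradient descent on the least-squares loss in an RKHS,
    with the stopping time chosen by validation error.
    If `radius` is given, the selected iterate is projected onto
    the RKHS ball {f : ||f||_K <= radius}."""
    K_tr = gaussian_kernel(X_tr, X_tr)
    K_val = gaussian_kernel(X_val, X_tr)
    n = len(y_tr)
    alpha = np.zeros(n)  # f_t = sum_i alpha_i K(x_i, .)
    best_alpha, best_err = alpha.copy(), np.inf
    for _ in range(max_iter):
        residual = K_tr @ alpha - y_tr
        alpha = alpha - (step / n) * residual  # plain GD step, no penalty term
        val_err = np.mean((K_val @ alpha - y_val) ** 2)
        if val_err < best_err:  # data-dependent stopping: keep the best iterate
            best_err, best_alpha = val_err, alpha.copy()
    alpha = best_alpha
    if radius is not None:
        norm = np.sqrt(alpha @ K_tr @ alpha)  # RKHS norm of the estimator
        if norm > radius:
            alpha = alpha * (radius / norm)   # project onto the ball
    return alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (80, 1))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(80)
    alpha = kernel_gd_early_stopping(X[:60], y[:60], X[60:], y[60:], radius=5.0)
```

Tracking the best validation error plays the role of the cross-validation stopping rule described in the abstract, and the final rescaling is the projection of the last iterate: scaling the coefficient vector shrinks the RKHS norm to the given radius without changing the direction of the estimator.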
Cite
Text
Hu and Lei. "Early Stopping for Iterative Regularization with General Loss Functions." Journal of Machine Learning Research, 2022.
Markdown
[Hu and Lei. "Early Stopping for Iterative Regularization with General Loss Functions." Journal of Machine Learning Research, 2022.](https://mlanthology.org/jmlr/2022/hu2022jmlr-early/)
BibTeX
@article{hu2022jmlr-early,
title = {{Early Stopping for Iterative Regularization with General Loss Functions}},
author = {Hu, Ting and Lei, Yunwen},
journal = {Journal of Machine Learning Research},
year = {2022},
pages = {1--36},
volume = {23},
url = {https://mlanthology.org/jmlr/2022/hu2022jmlr-early/}
}