Iterative Regularization for Learning with Convex Loss Functions
Abstract
We consider the problem of supervised learning with convex loss functions and propose a new form of iterative regularization based on the subgradient method. Unlike other regularization approaches, iterative regularization imposes no constraint or penalty; generalization is achieved instead by (early) stopping an empirical iteration. We consider a nonparametric setting, in the framework of reproducing kernel Hilbert spaces, and prove consistency and finite sample bounds on the excess risk under general regularity conditions. Our study provides a new class of efficient regularized learning algorithms and offers insight into the interplay between statistics and optimization in machine learning.
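A minimal sketch of the kind of algorithm the abstract describes, assuming a hinge loss and a Gaussian kernel: run the subgradient method on the unpenalized empirical risk in an RKHS, and regularize by the choice of stopping iteration (here selected on a hold-out set). This is an illustration, not the authors' code; the function names and the validation-based stopping rule are assumptions for the example.

```python
# Sketch: iterative regularization by early-stopped subgradient descent in an RKHS.
# Hypothetical helper names; hinge loss and Gaussian kernel chosen for concreteness.
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian kernel matrix between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def subgradient_early_stopping(X, y, X_val, y_val, T=200, eta=0.5):
    """Run T subgradient steps on the empirical hinge-loss risk (no penalty term);
    return the kernel expansion coefficients at the iteration with the best
    validation risk. The estimator is f_t(x) = sum_i alpha_t[i] * K(x_i, x)."""
    n = len(y)
    K = gaussian_kernel(X, X)
    K_val = gaussian_kernel(X_val, X)
    alpha = np.zeros(n)
    best_alpha, best_err = alpha.copy(), np.inf
    for t in range(1, T + 1):
        margins = y * (K @ alpha)
        # Subgradient of the empirical risk (1/n) sum max(0, 1 - y_i f(x_i))
        # with respect to f: contributes -y_i on examples with margin < 1.
        g = np.where(margins < 1, -y, 0.0) / n
        alpha -= (eta / np.sqrt(t)) * g  # decaying step size eta_t = eta / sqrt(t)
        # The number of iterations plays the role of the regularization
        # parameter; track hold-out risk to pick the stopping time.
        err = np.mean(np.maximum(0.0, 1 - y_val * (K_val @ alpha)))
        if err < best_err:
            best_err, best_alpha = err, alpha.copy()
    return best_alpha
```

Running fewer iterations corresponds to stronger regularization, so the stopping time trades off optimization accuracy against overfitting, which is the statistics/optimization interplay the abstract points to.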
Cite
Text
Lin et al. "Iterative Regularization for Learning with Convex Loss Functions." Journal of Machine Learning Research, 2016.
Markdown
[Lin et al. "Iterative Regularization for Learning with Convex Loss Functions." Journal of Machine Learning Research, 2016.](https://mlanthology.org/jmlr/2016/lin2016jmlr-iterative/)
BibTeX
@article{lin2016jmlr-iterative,
title = {{Iterative Regularization for Learning with Convex Loss Functions}},
author = {Lin, Junhong and Rosasco, Lorenzo and Zhou, Ding-Xuan},
journal = {Journal of Machine Learning Research},
year = {2016},
pages = {1--38},
volume = {17},
url = {https://mlanthology.org/jmlr/2016/lin2016jmlr-iterative/}
}