On the Convergence of Leveraging
Abstract
We give a unified convergence analysis of ensemble learning methods including, e.g., AdaBoost, Logistic Regression, and the Least-Square-Boost algorithm for regression. These methods have in common that they iteratively call a base learning algorithm which returns hypotheses that are then linearly combined. We show that these methods are related to the Gauss-Southwell method known from numerical optimization and state non-asymptotic convergence results for all these methods. Our analysis includes ℓ1-norm regularized cost functions, leading to a clean and general way to regularize ensemble learning.
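The leveraging scheme described in the abstract, repeatedly calling a base learner on reweighted data and adding the returned hypothesis to a linear combination, can be made concrete with a small sketch. The Python snippet below is illustrative only: it instantiates the generic loop with an AdaBoost-style exponential-loss coefficient and weighted decision stumps as base hypotheses; the function names, the stump learner, and the toy data are assumptions for this example, not part of the paper.

```python
import numpy as np

def train_stump(X, y, w):
    """Base learner (assumed here): weighted decision stump over one feature."""
    d = X.shape[1]
    best = None
    for j in range(d):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] <= thr, sign, -sign)
                err = np.sum(w[pred != y])          # weighted training error
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    err, j, thr, sign = best
    h = lambda Z, j=j, thr=thr, sign=sign: np.where(Z[:, j] <= thr, sign, -sign)
    return h, err

def leverage(X, y, n_iter=20):
    """Generic leveraging loop: call the base learner, then linearly combine."""
    n = len(y)
    w = np.full(n, 1.0 / n)                  # distribution over examples
    ensemble = []                            # list of (coefficient, hypothesis)
    for _ in range(n_iter):
        h, err = train_stump(X, y, w)
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)    # AdaBoost-style coefficient
        ensemble.append((alpha, h))
        w *= np.exp(-alpha * y * h(X))           # exponential-loss reweighting
        w /= w.sum()
    return ensemble

def predict(ensemble, X):
    """Sign of the linear combination of base hypotheses."""
    F = sum(alpha * h(X) for alpha, h in ensemble)
    return np.sign(F)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.sign(X[:, 0] + 0.5 * X[:, 1])     # toy labels in {-1, +1}
    ens = leverage(X, y, n_iter=10)
    print("training accuracy:", np.mean(predict(ens, X) == y))
```

In the coordinate-descent view taken in the paper, each base hypothesis plays the role of a coordinate and the coefficient is the step taken along it; choosing the (approximately) best hypothesis at each round is what connects these methods to the Gauss-Southwell rule.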
Cite

Text

Rätsch et al. "On the Convergence of Leveraging." Neural Information Processing Systems, 2001.

Markdown

[Rätsch et al. "On the Convergence of Leveraging." Neural Information Processing Systems, 2001.](https://mlanthology.org/neurips/2001/ratsch2001neurips-convergence/)

BibTeX
@inproceedings{ratsch2001neurips-convergence,
  title     = {{On the Convergence of Leveraging}},
  author    = {R{\"a}tsch, Gunnar and Mika, Sebastian and Warmuth, Manfred K.},
  booktitle = {Neural Information Processing Systems},
  year      = {2001},
  pages     = {487-494},
  url       = {https://mlanthology.org/neurips/2001/ratsch2001neurips-convergence/}
}