Generalized Boosting Algorithms for Convex Optimization
Abstract
Boosting is a popular way to derive powerful learners from simpler hypothesis classes. Following previous work (Mason et al., 1999; Friedman, 2000) on general boosting frameworks, we analyze gradient-based descent algorithms for boosting with respect to any convex objective and introduce a new measure of weak learner performance into this setting which generalizes existing work. We present the first weak-to-strong learning guarantees for the existing gradient boosting work for smooth convex objectives, and also demonstrate that this work fails for non-smooth objectives. To address this issue, we present new algorithms which extend this boosting approach to arbitrary convex loss functions and give corresponding weak-to-strong convergence results. In addition, we present experimental results that support our analysis and demonstrate the need for the new algorithms we introduce.
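The abstract describes gradient-based boosting as descent in function space: at each round, a weak learner is fit to the negative functional gradient of a convex loss, and the ensemble takes a step in that direction. Below is a minimal, hedged sketch of this idea for a smooth loss (squared error), using a single-threshold regression stump as an illustrative weak learner; the function names, the stump learner, and the fixed step size are assumptions for illustration, not the paper's algorithms.

```python
import numpy as np

def neg_gradient(y, f):
    # Negative functional gradient of the squared loss 0.5 * (f - y)^2,
    # evaluated pointwise at the training examples.
    return y - f

def fit_stump(x, r):
    # Illustrative weak learner: best single-threshold regression stump
    # fit to the pseudo-residuals r (the projected gradient direction).
    best = None
    for t in np.unique(x)[:-1]:  # last threshold would leave one side empty
        left, right = r[x <= t], r[x > t]
        pred = np.where(x <= t, left.mean(), right.mean())
        err = ((r - pred) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda z: np.where(z <= t, lv, rv)

def boost(x, y, rounds=50, step=0.5):
    # Gradient descent in function space: each round adds a scaled weak
    # learner fit to the current negative gradient.
    f = np.zeros_like(y, dtype=float)
    learners = []
    for _ in range(rounds):
        h = fit_stump(x, neg_gradient(y, f))
        f = f + step * h(x)
        learners.append(h)
    return f, learners
```

For non-smooth objectives this plain gradient step is exactly what the paper shows can fail, motivating the new algorithms it introduces.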
Cite
Text
Grubb and Bagnell. "Generalized Boosting Algorithms for Convex Optimization." International Conference on Machine Learning, 2011.
Markdown
[Grubb and Bagnell. "Generalized Boosting Algorithms for Convex Optimization." International Conference on Machine Learning, 2011.](https://mlanthology.org/icml/2011/grubb2011icml-generalized/)
BibTeX
@inproceedings{grubb2011icml-generalized,
title = {{Generalized Boosting Algorithms for Convex Optimization}},
author = {Grubb, Alexander and Bagnell, Drew},
booktitle = {International Conference on Machine Learning},
year = {2011},
pages = {1209-1216},
url = {https://mlanthology.org/icml/2011/grubb2011icml-generalized/}
}