Online Gradient Boosting
Abstract
We extend the theory of boosting for regression problems to the online learning setting. Generalizing from the batch setting for boosting, the notion of a weak learning algorithm is modeled as an online learning algorithm with linear loss functions that competes with a base class of regression functions, while a strong learning algorithm is an online learning algorithm with smooth convex loss functions that competes with a larger class of regression functions. Our main result is an online gradient boosting algorithm which converts a weak online learning algorithm into a strong one where the larger class of functions is the linear span of the base class. We also give a simpler boosting algorithm that converts a weak online learning algorithm into a strong one where the larger class of functions is the convex hull of the base class, and prove its optimality.
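To make the construction concrete, below is a minimal Python sketch of the simpler scheme mentioned in the abstract, in which N copies of a weak online learner are combined so the booster competes with the convex hull of the base class. The `LinearWeakLearner`, the Frank-Wolfe-style step sizes 2/(i+1), and the squared-loss demo are illustrative assumptions for this sketch, not the paper's exact algorithm or notation.

```python
import numpy as np


class LinearWeakLearner:
    """Hypothetical weak online learner (an assumption for this sketch):
    a linear predictor trained by online gradient descent on the linear
    losses handed to it by the booster."""

    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def predict(self, x):
        return float(self.w @ x)

    def update(self, x, slope):
        # Gradient of the linear loss  w -> slope * (w @ x)  is  slope * x.
        self.w -= self.lr * slope * x


class OnlineBoostingCH:
    """Sketch of the simpler booster from the abstract: N weak-learner
    copies are combined with Frank-Wolfe-style steps so the combination
    competes with the convex hull of the base class."""

    def __init__(self, learners):
        self.learners = learners  # N copies of the weak online learner

    def round(self, x, loss_grad):
        # Build the staged predictions y^0, y^1, ..., y^N for this example.
        y, partial = 0.0, []
        for i, learner in enumerate(self.learners, start=1):
            partial.append(y)
            eta = 2.0 / (i + 1)  # illustrative step-size schedule
            y = (1.0 - eta) * y + eta * learner.predict(x)
        # Each copy i is updated on a linear loss whose slope is the
        # gradient of the round's convex loss at the previous partial sum.
        for learner, y_prev in zip(self.learners, partial):
            learner.update(x, loss_grad(y_prev))
        return y


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    booster = OnlineBoostingCH([LinearWeakLearner(dim=3) for _ in range(10)])
    for _ in range(500):
        x = rng.normal(size=3)
        target = 0.5 * x[0] - 0.2 * x[2]
        # Squared loss l(y) = 0.5 * (y - target)^2, so grad l(y) = y - target.
        booster.round(x, loss_grad=lambda y, t=target: y - t)
```

The key point the sketch illustrates is that each weak-learner copy only ever sees linear losses (a single gradient slope per round), while the booster as a whole incurs the smooth convex loss on its combined prediction.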
Cite
Text
Beygelzimer et al. "Online Gradient Boosting." Neural Information Processing Systems, 2015.
Markdown
[Beygelzimer et al. "Online Gradient Boosting." Neural Information Processing Systems, 2015.](https://mlanthology.org/neurips/2015/beygelzimer2015neurips-online/)
BibTeX
@inproceedings{beygelzimer2015neurips-online,
title = {{Online Gradient Boosting}},
author = {Beygelzimer, Alina and Hazan, Elad and Kale, Satyen and Luo, Haipeng},
booktitle = {Neural Information Processing Systems},
year = {2015},
pages = {2458--2466},
url = {https://mlanthology.org/neurips/2015/beygelzimer2015neurips-online/}
}