A Gradient-Based Boosting Algorithm for Regression Problems
Abstract
In adaptive boosting, several weak learners trained sequentially are combined to boost the overall algorithm performance. Recently, adaptive boosting methods for classification problems have been derived as gradient descent algorithms. This formulation justifies key elements and parameters in the methods, all chosen to optimize a single common objective function. We propose an analogous formulation for adaptive boosting of regression problems, utilizing a novel objective function that leads to a simple boosting algorithm. We prove that this method reduces training error, and compare its performance to other regression methods.
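The gradient-descent view of boosting described above can be illustrated with a minimal generic sketch: weak learners (here, one-dimensional decision stumps) are fit sequentially to the negative gradient of a squared-error loss, which for this loss is simply the residual. This is a standard illustration of stage-wise gradient boosting, not the specific objective function or algorithm proposed in the paper; the function names and parameters are invented for the example.

```python
# Generic gradient boosting for regression with squared loss (illustrative
# sketch only; this is NOT the paper's algorithm or objective function).
import numpy as np

def fit_stump(x, residual):
    """Fit a 1-D decision stump to the residual by minimizing squared error."""
    order = np.argsort(x)
    xs, rs = x[order], residual[order]
    best = None
    for i in range(1, len(xs)):
        left, right = rs[:i].mean(), rs[i:].mean()
        sse = ((rs[:i] - left) ** 2).sum() + ((rs[i:] - right) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, xs[i - 1], left, right)
    _, thresh, left, right = best
    return lambda q: np.where(q <= thresh, left, right)

def gradient_boost(x, y, n_rounds=50, lr=0.1):
    """Sequentially add stumps fit to the negative gradient (the residual)."""
    pred = np.full(len(y), y.mean())  # start from the constant predictor
    for _ in range(n_rounds):
        # For squared loss, the negative gradient at each point is y - pred.
        h = fit_stump(x, y - pred)
        pred += lr * h(x)             # take a small step along the new learner
    return pred

# Toy regression problem: noisy sine wave.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=200)
pred = gradient_boost(x, y)
print(np.mean((y - pred) ** 2))  # training MSE after boosting
```

Each round fits a weak learner to the current residual and takes a damped step, so training error decreases monotonically in the number of rounds, mirroring the training-error reduction the paper proves for its own objective.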
Cite
Text
Zemel and Pitassi. "A Gradient-Based Boosting Algorithm for Regression Problems." Neural Information Processing Systems, 2000.

Markdown

[Zemel and Pitassi. "A Gradient-Based Boosting Algorithm for Regression Problems." Neural Information Processing Systems, 2000.](https://mlanthology.org/neurips/2000/zemel2000neurips-gradientbased/)

BibTeX
@inproceedings{zemel2000neurips-gradientbased,
title = {{A Gradient-Based Boosting Algorithm for Regression Problems}},
author = {Zemel, Richard S. and Pitassi, Toniann},
booktitle = {Neural Information Processing Systems},
year = {2000},
pages = {696-702},
url = {https://mlanthology.org/neurips/2000/zemel2000neurips-gradientbased/}
}