Boosting Methods for Regression

Abstract

In this paper we examine ensemble methods for regression that leverage, or “boost,” base regressors by iteratively calling them on modified samples. The most successful leveraging algorithm for classification is AdaBoost, an algorithm that requires only modest assumptions on the base learning method for its strong theoretical guarantees. We present several gradient descent leveraging algorithms for regression and prove AdaBoost-style bounds on their sample errors using intuitive assumptions on the base learners. We bound the complexity of the regression functions produced in order to derive PAC-style bounds on their generalization errors. Experiments validate our theoretical results.
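The abstract describes gradient descent leveraging: an ensemble is built by repeatedly calling a base regressor on modified samples and combining the results. As a minimal illustrative sketch (not the paper's exact algorithms), squared-loss gradient boosting captures the idea: each round fits the base learner to the current residuals, which are the negative gradient of the squared loss, and adds a damped copy of its predictions to the ensemble. The function names `gradient_boost_regression` and `fit_stump` are ours for illustration.

```python
import numpy as np

def gradient_boost_regression(X, y, base_learner_fit, n_rounds=50, learning_rate=0.1):
    """Squared-loss gradient boosting sketch: each round fits a base
    regressor to the current residuals (the negative gradient of the
    squared loss) and takes a damped gradient-descent step."""
    F = np.zeros(len(y))            # ensemble prediction on the sample
    learners = []
    for _ in range(n_rounds):
        residuals = y - F           # negative gradient of 0.5 * (y - F)^2
        h = base_learner_fit(X, residuals)
        learners.append(h)
        F += learning_rate * h(X)   # gradient-descent step in function space
    def predict(X_new):
        return learning_rate * sum(h(X_new) for h in learners)
    return predict

def fit_stump(X, r):
    """A weak base regressor: the best single-threshold split ("stump")
    on the first feature, predicting the residual mean on each side."""
    x = X[:, 0]
    best = None
    for t in np.unique(x):
        left = r[x <= t].mean() if (x <= t).any() else 0.0
        right = r[x > t].mean() if (x > t).any() else 0.0
        err = ((r - np.where(x <= t, left, right)) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left, right)
    _, t, left, right = best
    return lambda Xn: np.where(Xn[:, 0] <= t, left, right)
```

With weak base learners such as stumps, the training residual shrinks geometrically when each round removes a constant fraction of the remaining error, which is the flavor of sample-error bound the paper proves under its assumptions on the base learners.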

Cite

Text

Duffy and Helmbold. "Boosting Methods for Regression." Machine Learning, 2002. doi:10.1023/A:1013685603443

Markdown

[Duffy and Helmbold. "Boosting Methods for Regression." Machine Learning, 2002.](https://mlanthology.org/mlj/2002/duffy2002mlj-boosting/) doi:10.1023/A:1013685603443

BibTeX

@article{duffy2002mlj-boosting,
  title     = {{Boosting Methods for Regression}},
  author    = {Duffy, Nigel and Helmbold, David P.},
  journal   = {Machine Learning},
  year      = {2002},
  pages     = {153--200},
  doi       = {10.1023/A:1013685603443},
  volume    = {47},
  url       = {https://mlanthology.org/mlj/2002/duffy2002mlj-boosting/}
}