Leveraging for Regression

Abstract

In this paper we examine master regression algorithms that leverage base regressors by iteratively calling them on modified samples. The most successful leveraging algorithm for classification is AdaBoost, an algorithm that requires only modest assumptions on the base learning method for its good theoretical bounds. We present three gradient descent leveraging algorithms for regression and prove AdaBoost-style bounds on their sample error using intuitive assumptions on the base learners. We derive bounds on the size of the master functions that lead to PAC-style bounds on the generalization error.
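The paper's exact algorithms and assumptions are not reproduced here, but the general idea of a gradient descent leveraging scheme — iteratively calling a base regressor on a modified sample (here, the residuals of squared loss) and adding its weighted prediction into the master function — can be sketched as follows. The stump base learner, the fixed step size `alpha`, and the synthetic data are illustrative assumptions, not details from the paper:

```python
# Hedged sketch of gradient-descent leveraging for regression
# (functional gradient descent on squared loss). This is NOT the
# paper's specific algorithm; base learner, step size, and data
# are assumptions made for illustration.
import numpy as np

def fit_stump(x, r):
    """Base regressor: a 1-D regression stump minimizing squared error."""
    order = np.argsort(x)
    xs, rs = x[order], r[order]
    best_err, best_params = np.inf, None
    for i in range(1, len(xs)):
        thresh = (xs[i - 1] + xs[i]) / 2.0
        left, right = rs[:i].mean(), rs[i:].mean()
        err = ((rs[:i] - left) ** 2).sum() + ((rs[i:] - right) ** 2).sum()
        if err < best_err:
            best_err, best_params = err, (thresh, left, right)
    t, lo, hi = best_params
    return lambda z: np.where(z < t, lo, hi)

def leverage(x, y, rounds=50, alpha=0.5):
    """Build a master function by repeatedly calling the base learner
    on the residuals (the 'modified sample' of the leveraging scheme)."""
    master = []                 # list of (weight, base function) pairs
    F = np.zeros_like(y)        # current master prediction on the sample
    for _ in range(rounds):
        resid = y - F           # negative gradient of squared loss
        h = fit_stump(x, resid)
        master.append((alpha, h))
        F = F + alpha * h(x)    # gradient descent step in function space
    return lambda z: sum(a * h(z) for a, h in master)

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = np.sin(x) + 0.1 * rng.normal(size=200)
f = leverage(x, y)
mse = np.mean((f(x) - y) ** 2)  # sample error decreases with more rounds
```

The master function here is a weighted sum of base regressors, so its "size" grows with the number of rounds and the step size — the quantity the paper's generalization bounds control.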

Cite

Text

Duffy and Helmbold. "Leveraging for Regression." Annual Conference on Computational Learning Theory, 2000.

Markdown

[Duffy and Helmbold. "Leveraging for Regression." Annual Conference on Computational Learning Theory, 2000.](https://mlanthology.org/colt/2000/duffy2000colt-leveraging/)

BibTeX

@inproceedings{duffy2000colt-leveraging,
  title     = {{Leveraging for Regression}},
  author    = {Duffy, Nigel and Helmbold, David P.},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2000},
  pages     = {208--219},
  url       = {https://mlanthology.org/colt/2000/duffy2000colt-leveraging/}
}