Stacked Regressions

Abstract

Stacking regressions is a method for forming linear combinations of different predictors to give improved prediction accuracy. The idea is to use cross-validation data and least squares under non-negativity constraints to determine the coefficients in the combination. Its effectiveness is demonstrated in stacking regression trees of different sizes and in a simulation stacking linear subset and ridge regressions. Reasons why this method works are explored. The idea of stacking originated with Wolpert (1992).
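The procedure the abstract describes can be sketched in a few lines: get level-one data from cross-validated predictions of each base predictor, then solve a least squares problem under non-negativity constraints for the combination weights. This is a minimal illustrative sketch, not the paper's implementation; the base models (three ridge regressions with assumed penalties), the synthetic data, and the projected-gradient NNLS solver are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (sizes and noise level are illustrative)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + rng.normal(scale=0.5, size=n)

def ridge_fit(X, y, lam):
    """Closed-form ridge coefficients (X'X + lam*I)^{-1} X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

lams = [0.1, 10.0, 1000.0]  # three base ridge predictors (assumed penalties)

# Level-one data: K-fold cross-validated predictions from each base model
K = 5
folds = np.array_split(np.arange(n), K)
Z = np.zeros((n, len(lams)))
for k in range(K):
    test = folds[k]
    train = np.setdiff1d(np.arange(n), test)
    for j, lam in enumerate(lams):
        b = ridge_fit(X[train], y[train], lam)
        Z[test, j] = X[test] @ b

# Combination weights: least squares under non-negativity constraints,
# solved here by simple projected gradient descent on ||Z w - y||^2
w = np.zeros(len(lams))
step = 1.0 / np.linalg.norm(Z, 2) ** 2  # 1 / largest singular value squared
for _ in range(5000):
    w = np.maximum(0.0, w - step * (Z.T @ (Z @ w - y)))

# Final stacked predictor: refit each base model on all data, combine with w
coefs = [ridge_fit(X, y, lam) for lam in lams]
stacked_pred = sum(wj * (X @ bj) for wj, bj in zip(w, coefs))
print("weights:", np.round(w, 3))
print("stacked MSE:", round(float(np.mean((y - stacked_pred) ** 2)), 3))
```

The non-negativity constraint is what keeps the combination stable: because the cross-validated predictions of the base models are highly correlated, unconstrained least squares tends to produce large offsetting positive and negative weights, which the constraint rules out.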

Cite

Text

Breiman. "Stacked Regressions." Machine Learning, 1996. doi:10.1007/BF00117832

Markdown

[Breiman. "Stacked Regressions." Machine Learning, 1996.](https://mlanthology.org/mlj/1996/breiman1996mlj-stacked/) doi:10.1007/BF00117832

BibTeX

@article{breiman1996mlj-stacked,
  title     = {{Stacked Regressions}},
  author    = {Breiman, Leo},
  journal   = {Machine Learning},
  year      = {1996},
  pages     = {49--64},
  doi       = {10.1007/BF00117832},
  volume    = {24},
  url       = {https://mlanthology.org/mlj/1996/breiman1996mlj-stacked/}
}