Model Averaging Is Asymptotically Better than Model Selection for Prediction
Abstract
We compare the performance of six model average predictors---Mallows' model averaging, stacking, Bayes model averaging, bagging, random forests, and boosting---to the components used to form them. In all six cases we identify conditions under which the model average predictor is consistent for its intended limit and performs as well or better than any of its components asymptotically. This is well known empirically, especially for complex problems, although theoretical results do not seem to have been formally established. We have focused our attention on the regression context since that is where model averaging techniques differ most often from current practice.
Cite
Text
Le and Clarke. "Model Averaging Is Asymptotically Better than Model Selection for Prediction." Journal of Machine Learning Research, 2022.
Markdown
[Le and Clarke. "Model Averaging Is Asymptotically Better than Model Selection for Prediction." Journal of Machine Learning Research, 2022.](https://mlanthology.org/jmlr/2022/le2022jmlr-model/)
BibTeX
@article{le2022jmlr-model,
title = {{Model Averaging Is Asymptotically Better than Model Selection for Prediction}},
author = {Le, Tri M. and Clarke, Bertrand S.},
journal = {Journal of Machine Learning Research},
year = {2022},
  pages = {1--53},
volume = {23},
url = {https://mlanthology.org/jmlr/2022/le2022jmlr-model/}
}