Averaging Regularized Estimators
Abstract
We compare the performance of averaged regularized estimators. We show that the improvement in performance that can be achieved by averaging depends critically on the degree of regularization used in training the individual estimators. We compare four averaging approaches: simple averaging, bagging, variance-based weighting, and variance-based bagging. For all of the averaging methods, the greatest improvement over the individual estimators is achieved if no or only a small degree of regularization is used; in this regime, variance-based weighting and variance-based bagging are superior to simple averaging and bagging. Our experiments indicate, however, that both the individual estimators and their averages achieve better performance in combination with regularization. With increasing degrees of regularization, the two bagging-based approaches (bagging and variance-based bagging) outperform the individual estimators, simple averaging, and variance-based weighting. Overall, bagging and variance-based bagging appear to be the best combining methods over a wide range of degrees of regularization.
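As a rough illustration of the combining rules discussed in the abstract (not code from the paper), the sketch below trains ridge-regularized linear models on bootstrap resamples and combines their predictions with uniform weights (bagging) and inverse-variance weights (variance-based bagging). The ridge estimator, the value of `lam`, the toy data, and the out-of-bag variance estimates are all assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ridge(X, y, lam):
    """Closed-form ridge regression; lam sets the degree of regularization."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Toy regression problem (all values here are illustrative assumptions).
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.5 * rng.normal(size=100)
X_test = rng.normal(size=(50, 5))

lam = 0.1        # regularization strength (hypothetical choice)
n_models = 10

preds, variances = [], []
for _ in range(n_models):
    # Bagging: each ensemble member is trained on a bootstrap resample.
    idx = rng.integers(0, len(X), size=len(X))
    w = fit_ridge(X[idx], y[idx], lam)
    preds.append(X_test @ w)
    # Per-model error-variance estimate from out-of-bag residuals
    # (one plausible way to obtain the variances; not taken from the paper).
    oob = np.setdiff1d(np.arange(len(X)), idx)
    variances.append(np.mean((y[oob] - X[oob] @ w) ** 2))

preds = np.asarray(preds)           # shape (n_models, n_test_points)
inv_var = 1.0 / np.asarray(variances)
w_var = inv_var / inv_var.sum()     # inverse-variance weights

bagged = preds.mean(axis=0)         # bagging: uniform weights
var_bagged = w_var @ preds          # variance-based bagging
```

Simple averaging and variance-based weighting apply the same uniform and inverse-variance rules to members trained on the full data set (e.g., neural networks differing only in their random initialization) rather than on bootstrap resamples.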
Cite
Text
Taniguchi and Tresp. "Averaging Regularized Estimators." Neural Computation, 1997. doi:10.1162/NECO.1997.9.5.1163
Markdown
[Taniguchi and Tresp. "Averaging Regularized Estimators." Neural Computation, 1997.](https://mlanthology.org/neco/1997/taniguchi1997neco-averaging/) doi:10.1162/NECO.1997.9.5.1163
BibTeX
@article{taniguchi1997neco-averaging,
title = {{Averaging Regularized Estimators}},
author = {Taniguchi, Michiaki and Tresp, Volker},
journal = {Neural Computation},
year = {1997},
pages = {1163--1178},
doi = {10.1162/NECO.1997.9.5.1163},
volume = {9},
number = {5},
url = {https://mlanthology.org/neco/1997/taniguchi1997neco-averaging/}
}