Variance Optimized Bagging
Abstract
We propose and study a new technique for aggregating an ensemble of bootstrapped classifiers. In this method we seek a linear combination of the base classifiers such that the weights are optimized to reduce variance. Minimum-variance combinations are computed using quadratic programming. This optimization technique is borrowed from mathematical finance, where it is called Markowitz mean-variance portfolio optimization. We test the new method on a number of binary classification problems from the UCI repository using a Support Vector Machine (SVM) as the base-classifier learning algorithm. Our results indicate that the proposed technique can consistently outperform Bagging and can dramatically improve SVM performance even in cases where Bagging fails to improve the base classifier.
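The core idea of the abstract, minimum-variance weighting of ensemble members, can be sketched with the closed-form Markowitz solution. This is an illustrative approximation, not the paper's exact method: the paper solves a quadratic program, while the closed form below handles only the sum-to-one constraint (so weights may be negative), and the covariance estimate from validation-set SVM margins is an assumption of this sketch.

```python
import numpy as np

def min_variance_weights(preds):
    """Minimum-variance combination weights for an ensemble.

    preds: (n_classifiers, n_samples) array of real-valued outputs
    (e.g. margins of bootstrapped base classifiers on a held-out set).
    Returns weights w with sum(w) = 1 minimizing the combination
    variance w^T C w, where C is the covariance of the predictions:
    w = C^{-1} 1 / (1^T C^{-1} 1)  (Markowitz minimum-variance portfolio).
    """
    C = np.cov(preds)                  # covariance between classifiers
    C += 1e-8 * np.eye(C.shape[0])    # small ridge for numerical stability
    ones = np.ones(C.shape[0])
    w = np.linalg.solve(C, ones)      # proportional to C^{-1} 1
    return w / w.sum()                # normalize so weights sum to 1

# Toy ensemble: three noisy "classifiers" observing the same signal
rng = np.random.default_rng(0)
signal = rng.standard_normal(200)
preds = np.stack([signal + s * rng.standard_normal(200)
                  for s in (0.2, 0.5, 1.0)])

w = min_variance_weights(preds)
combined = w @ preds  # variance-optimized aggregate, cf. uniform Bagging mean
```

By construction the uniform Bagging average is one feasible weighting, so the optimized combination's variance can never exceed it on the data used to fit the weights.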
Cite
Text
Derbeko et al. "Variance Optimized Bagging." European Conference on Machine Learning, 2002. doi:10.1007/3-540-36755-1_6
Markdown
[Derbeko et al. "Variance Optimized Bagging." European Conference on Machine Learning, 2002.](https://mlanthology.org/ecmlpkdd/2002/derbeko2002ecml-variance/) doi:10.1007/3-540-36755-1_6
BibTeX
@inproceedings{derbeko2002ecml-variance,
title = {{Variance Optimized Bagging}},
author = {Derbeko, Philip and El-Yaniv, Ran and Meir, Ron},
booktitle = {European Conference on Machine Learning},
year = {2002},
pages = {60--71},
doi = {10.1007/3-540-36755-1_6},
url = {https://mlanthology.org/ecmlpkdd/2002/derbeko2002ecml-variance/}
}