Towards a Theoretical Framework for Ensemble Classification
Abstract
Ensemble learning schemes such as AdaBoost and Bagging enhance the performance of a single classifier by combining predictions from multiple classifiers of the same type. The predictions from an ensemble of diverse classifiers can be combined in related ways, e.g. by voting or simply by selecting the best classifier via cross-validation – a technique widely used in machine learning. However, since no ensemble scheme is always the best choice, a deeper insight into the structure of meaningful approaches to combining predictions is needed to achieve further progress. In this paper we offer an operational reformulation of common ensemble learning schemes – Voting, Selection by Cross-Validation (X-Val), Grading and Bagging – as a Stacking scheme with appropriate parameter settings. Thus, from a theoretical point of view all these schemes can be reduced to Stacking with an appropriate combination method. This result is an important step towards a general theoretical framework for the field of ensemble learning.
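The reduction the abstract describes can be made concrete with a minimal sketch: a Stacking scheme whose meta-level combiner is a pluggable function over the base classifiers' predictions. With a majority-vote combiner it reproduces Voting; a combiner that always returns one chosen classifier's prediction would correspond to Selection by Cross-Validation. The class, the toy threshold classifiers, and their names below are hypothetical illustrations, not the paper's formalism.

```python
from collections import Counter

class Stacking:
    """Minimal stacking sketch: base classifiers produce level-1 predictions,
    which a combiner (the meta-level) maps to a final label."""
    def __init__(self, base_classifiers, combiner):
        self.base_classifiers = base_classifiers
        self.combiner = combiner  # function: list of base predictions -> final label

    def predict(self, x):
        level1 = [clf(x) for clf in self.base_classifiers]  # level-1 predictions
        return self.combiner(level1)

def majority_vote(predictions):
    # Voting expressed as a Stacking combiner: pick the most frequent prediction.
    return Counter(predictions).most_common(1)[0][0]

# Three toy base classifiers (hypothetical threshold rules on a scalar input).
clf_a = lambda x: 1 if x > 0.3 else 0
clf_b = lambda x: 1 if x > 0.5 else 0
clf_c = lambda x: 1 if x > 0.7 else 0

# Voting as Stacking with a majority-vote combination method.
voting = Stacking([clf_a, clf_b, clf_c], majority_vote)
print(voting.predict(0.6))  # clf_a and clf_b predict 1, clf_c predicts 0 -> 1
```

The point of the sketch is that only the combiner changes between schemes; the two-level structure (base classifiers feeding a combination method) stays fixed, which is the sense in which the schemes reduce to Stacking.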
Cite
Seewald. "Towards a Theoretical Framework for Ensemble Classification." International Joint Conference on Artificial Intelligence, 2003.
BibTeX
@inproceedings{seewald2003ijcai-theoretical,
title = {{Towards a Theoretical Framework for Ensemble Classification}},
author = {Seewald, Alexander K.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2003},
pages = {1443-1444},
url = {https://mlanthology.org/ijcai/2003/seewald2003ijcai-theoretical/}
}