Boosting with Diverse Base Classifiers
Abstract
We establish a new bound on the generalization error rate of the Boost-by-Majority algorithm. The bound holds when the algorithm is applied to a collection of base classifiers that contains a “diverse” subset of “good” classifiers, in a precisely defined sense. We describe cross-validation experiments that suggest that Boost-by-Majority can be the basis of a practically useful learning method, often improving on the generalization of AdaBoost on large datasets.
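The abstract compares Boost-by-Majority against AdaBoost when both are run over a collection of base classifiers. For readers unfamiliar with that setting, below is a minimal sketch of classical AdaBoost over a pool of one-dimensional threshold stumps. It is not the paper's Boost-by-Majority algorithm (which uses a fixed, non-adaptive reweighting scheme rather than AdaBoost's multiplicative updates), and the stump pool and toy data are illustrative assumptions.

```python
# Illustrative sketch only -- NOT the paper's Boost-by-Majority algorithm.
# Classical AdaBoost over a pool of 1-D threshold stumps, to show the
# general setting: boosting a collection of simple base classifiers.
import math

def stump_pool(xs):
    """All base classifiers of the form sign(x - t) and its negation."""
    pool = []
    for t in sorted(set(xs)):
        pool.append(lambda x, t=t: 1 if x >= t else -1)
        pool.append(lambda x, t=t: -1 if x >= t else 1)
    return pool

def adaboost(xs, ys, rounds=10):
    n = len(xs)
    w = [1.0 / n] * n                    # one weight per training example
    ensemble = []                        # list of (alpha, base classifier)
    for _ in range(rounds):
        # pick the stump with the smallest weighted training error
        best, best_err = None, float("inf")
        for h in stump_pool(xs):
            err = sum(wi for wi, x, y in zip(w, xs, ys) if h(x) != y)
            if err < best_err:
                best, best_err = h, err
        if best_err >= 0.5:              # no base classifier beats chance
            break
        best_err = max(best_err, 1e-12)  # guard against log(0)
        alpha = 0.5 * math.log((1 - best_err) / best_err)
        ensemble.append((alpha, best))
        # upweight the examples the chosen stump misclassified
        w = [wi * math.exp(-alpha * y * best(x))
             for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

# Toy sample: label +1 exactly on the interval [3, 7], so no single
# threshold stump is consistent, but a weighted vote of stumps is.
xs = [1, 2, 3, 4, 5, 6, 7, 8, 9]
ys = [-1, -1, 1, 1, 1, 1, 1, -1, -1]
model = adaboost(xs, ys, rounds=20)
acc = sum(predict(model, x) == y for x, y in zip(xs, ys)) / len(xs)
```

On this toy interval task the weighted vote combines an "x ≥ 3" stump, an "x < 8" stump, and a constant classifier to fit labels no single stump can, which is the basic phenomenon boosting exploits.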
Cite
Text
Dasgupta and Long. "Boosting with Diverse Base Classifiers." Annual Conference on Computational Learning Theory, 2003. doi:10.1007/978-3-540-45167-9_21
Markdown
[Dasgupta and Long. "Boosting with Diverse Base Classifiers." Annual Conference on Computational Learning Theory, 2003.](https://mlanthology.org/colt/2003/dasgupta2003colt-boosting/) doi:10.1007/978-3-540-45167-9_21
BibTeX
@inproceedings{dasgupta2003colt-boosting,
title = {{Boosting with Diverse Base Classifiers}},
author = {Dasgupta, Sanjoy and Long, Philip M.},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2003},
pages = {273--287},
doi = {10.1007/978-3-540-45167-9_21},
url = {https://mlanthology.org/colt/2003/dasgupta2003colt-boosting/}
}