A Consistent Strategy for Boosting Algorithms
Abstract
The probability of error of classification methods based on convex combinations of simple base classifiers produced by "boosting" algorithms is investigated. The main result of the paper is that certain regularized boosting algorithms provide Bayes-risk consistent classifiers under the sole assumption that the Bayes classifier can be approximated by a convex combination of the base classifiers. Non-asymptotic, distribution-free bounds are also developed, which offer new insight into how boosting works and help explain its success in practical classification problems.
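The regularized ensembles studied here live in the λ-scaled convex hull of the base class: combinations f = λ Σ_j w_j g_j with weights w_j ≥ 0 summing to one. The sketch below illustrates one standard way to fit such a classifier, minimizing the empirical exponential cost over this set with Frank-Wolfe-style greedy steps. It assumes decision stumps as base classifiers and the exponential cost; the function names, the step-size schedule, and the greedy selection are illustrative choices, not the authors' exact procedure.

```python
import numpy as np

def stump_predict(X, feature, threshold, sign):
    """Decision stump: predicts sign when X[:, feature] > threshold, else -sign."""
    return sign * np.where(X[:, feature] > threshold, 1.0, -1.0)

def fit_regularized_boosting(X, y, lam=1.0, n_rounds=50):
    """Greedily minimize the empirical exponential cost over the lam-scaled
    convex hull of decision stumps (a Frank-Wolfe-style sketch).
    y must contain labels in {-1, +1}. Returns (weight, stump) pairs whose
    weights sum to at most lam."""
    n = len(y)
    F = np.zeros(n)                      # current combined score f(x_i)
    ensemble = []
    for t in range(n_rounds):
        w = np.exp(-y * F)               # exponential-cost gradient weights
        w /= w.sum()
        # Pick the stump with the smallest weighted misclassification error;
        # this is the linear-minimization step of Frank-Wolfe for this cost.
        best, best_err = None, np.inf
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for s in (1.0, -1.0):
                    pred = stump_predict(X, j, thr, s)
                    err = w[pred != y].sum()
                    if err < best_err:
                        best_err, best = err, (j, thr, s)
        g = stump_predict(X, *best)
        gamma = 2.0 / (t + 2.0)          # classical Frank-Wolfe step size
        # Shrink the old combination and mix in the new stump; this keeps
        # the weights a convex combination scaled by lam.
        F = (1.0 - gamma) * F + gamma * lam * g
        ensemble = [(wt * (1.0 - gamma), st) for wt, st in ensemble]
        ensemble.append((gamma * lam, best))
    return ensemble

def predict(ensemble, X):
    """Classify by the sign of the combined score."""
    F = sum(wt * stump_predict(X, *st) for wt, st in ensemble)
    return np.where(F >= 0, 1, -1)
```

Here λ plays the role of the regularization parameter: it indexes how rich the class of scaled convex combinations is, and, roughly in the spirit of the paper, consistency is obtained by letting λ grow with the sample size at a suitable rate rather than keeping it fixed.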
Cite
Text
Lugosi and Vayatis. "A Consistent Strategy for Boosting Algorithms." Annual Conference on Computational Learning Theory, 2002. doi:10.1007/3-540-45435-7_21
Markdown
[Lugosi and Vayatis. "A Consistent Strategy for Boosting Algorithms." Annual Conference on Computational Learning Theory, 2002.](https://mlanthology.org/colt/2002/lugosi2002colt-consistent/) doi:10.1007/3-540-45435-7_21
BibTeX
@inproceedings{lugosi2002colt-consistent,
title = {{A Consistent Strategy for Boosting Algorithms}},
author = {Lugosi, Gábor and Vayatis, Nicolas},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2002},
pages = {303--318},
doi = {10.1007/3-540-45435-7_21},
url = {https://mlanthology.org/colt/2002/lugosi2002colt-consistent/}
}