AdaBoost Is Consistent
Abstract
The risk, or probability of error, of the classifier produced by the AdaBoost algorithm is investigated. In particular, we consider the stopping strategy to be used in AdaBoost to achieve universal consistency. We show that provided AdaBoost is stopped after n^{1−ε} iterations, for sample size n and ε ∈ (0, 1), the sequence of risks of the classifiers it produces approaches the Bayes risk if the Bayes risk L* > 0.
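The abstract's stopping strategy is easy to instantiate: run boosting for ⌊n^{1−ε}⌋ rounds, where n is the training-set size. Below is a minimal sketch of that rule, not the authors' code; it uses scikit-learn's `AdaBoostClassifier`, and the synthetic dataset and the choice ε = 0.2 are illustrative assumptions.

```python
# Sketch of the n^{1 - eps} stopping rule from the abstract (assumptions:
# synthetic data, eps = 0.2; any eps in (0, 1) satisfies the theorem).
import math

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

n = len(X_train)                             # sample size
eps = 0.2                                    # illustrative choice in (0, 1)
t_stop = max(1, math.floor(n ** (1 - eps)))  # number of boosting rounds

clf = AdaBoostClassifier(n_estimators=t_stop, random_state=0)
clf.fit(X_train, y_train)
print(f"n = {n}, rounds = {t_stop}, "
      f"test error = {1 - clf.score(X_test, y_test):.3f}")
```

As n grows, ⌊n^{1−ε}⌋ grows without bound but sublinearly, which is the balance the paper exploits: enough rounds to drive the boosting risk down, few enough to control complexity.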
Cite
Text
Bartlett and Traskin. "AdaBoost Is Consistent." Neural Information Processing Systems, 2006.

Markdown

[Bartlett and Traskin. "AdaBoost Is Consistent." Neural Information Processing Systems, 2006.](https://mlanthology.org/neurips/2006/bartlett2006neurips-adaboost/)

BibTeX
@inproceedings{bartlett2006neurips-adaboost,
title = {{AdaBoost Is Consistent}},
author = {Bartlett, Peter L. and Traskin, Mikhail},
booktitle = {Neural Information Processing Systems},
year = {2006},
pages = {105--112},
url = {https://mlanthology.org/neurips/2006/bartlett2006neurips-adaboost/}
}