An Improved Boosting Algorithm and Its Implications on Learning Complexity
Abstract
In this work we present some improvements and extensions to previous work on boosting weak learners [Sch90, Fre90]. Our main result is an improvement of the boosting-by-majority algorithm. One implication of the performance of this algorithm is that if a concept class can be learned in the PAC model to within some fixed error smaller than 1/2, then it can be learned to within an arbitrarily small error ε > 0 with time complexity O((1/ε)(log 1/ε)²) (fixing the sample space and concept class and the required reliability). We show that the majority rule is the optimal rule for combining general weak learners. We also extend the boosting algorithm to concept classes that give multi-valued labels and real-valued labels.
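The combination rule the abstract refers to can be illustrated with a minimal sketch. The snippet below combines weak binary hypotheses by an unweighted majority vote over ±1 predictions; the threshold stumps and toy data are invented for illustration, and this is only the final voting step, not Freund's full boosting-by-majority procedure, which also reweights the training distribution between rounds.

```python
# Illustrative sketch: combining weak hypotheses by majority vote.
# The stumps and data points below are hypothetical toy examples,
# not taken from the paper.

def majority_vote(hypotheses, x):
    """Predict +1 or -1 by the sign of the summed weak predictions."""
    total = sum(h(x) for h in hypotheses)
    return 1 if total >= 0 else -1

# Toy weak learners: decision stumps on a 1-D input, each only
# slightly better than random on the target concept f(x) = sign(x).
stumps = [
    lambda x: 1 if x > -0.5 else -1,
    lambda x: 1 if x > 0.3 else -1,
    lambda x: 1 if x > 0.1 else -1,
]

points = [-1.0, -0.2, 0.2, 0.5, 1.0]
preds = [majority_vote(stumps, x) for x in points]
```

On this toy data the vote recovers the target labels even though each individual stump misclassifies some of the points.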
Cite
Text
Freund. "An Improved Boosting Algorithm and Its Implications on Learning Complexity." Annual Conference on Computational Learning Theory, 1992. doi:10.1145/130385.130429

Markdown
[Freund. "An Improved Boosting Algorithm and Its Implications on Learning Complexity." Annual Conference on Computational Learning Theory, 1992.](https://mlanthology.org/colt/1992/freund1992colt-improved/) doi:10.1145/130385.130429

BibTeX
@inproceedings{freund1992colt-improved,
title = {{An Improved Boosting Algorithm and Its Implications on Learning Complexity}},
author = {Freund, Yoav},
booktitle = {Annual Conference on Computational Learning Theory},
year = {1992},
pages = {391--398},
doi = {10.1145/130385.130429},
url = {https://mlanthology.org/colt/1992/freund1992colt-improved/}
}