Boosting, Voting Classifiers and Randomized Sample Compression Schemes
Abstract
In Boosting, we aim to leverage multiple weak learners to produce a strong learner. At the center of this paradigm lies the concept of building the strong learner as a voting classifier, which outputs a weighted majority vote of the weak learners. While many successful Boosting algorithms, such as the iconic AdaBoost, produce voting classifiers, their theoretical performance has long remained sub-optimal: the best known bounds on the number of training examples necessary for a voting classifier to obtain a given accuracy have so far always contained at least two logarithmic factors above what is known to be achievable by general weak-to-strong learners. In this work, we break this barrier by proposing a randomized Boosting algorithm that outputs voting classifiers whose generalization error contains a single logarithmic dependency on the sample size. We obtain this result by building a general framework that extends sample compression methods to support randomized learning algorithms based on sub-sampling.
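To make the voting-classifier notion concrete, here is a minimal sketch of a weighted majority vote over weak learners, in the spirit of AdaBoost-style voting. The particular hypotheses, weights, and labels in {-1, +1} are illustrative assumptions, not the paper's algorithm.

import numpy as np

def majority_vote(weak_learners, alphas, X):
    # Each weak learner h maps inputs to labels in {-1, +1};
    # alphas are non-negative voting weights.
    # The voting classifier outputs sign(sum_i alpha_i * h_i(x)).
    scores = sum(a * h(X) for h, a in zip(weak_learners, alphas))
    return np.sign(scores)

# Example with two simple threshold ("stump"-like) weak learners on 1-D data:
X = np.array([-2.0, -0.5, 0.5, 2.0])
weak_learners = [lambda X: np.sign(X), lambda X: np.sign(X - 1.0)]
alphas = [0.7, 0.3]
print(majority_vote(weak_learners, alphas, X))  # array([-1., -1., 1., 1.])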
Cite
Text
Cunha et al. "Boosting, Voting Classifiers and Randomized Sample Compression Schemes." Proceedings of The 36th International Conference on Algorithmic Learning Theory, 2025.
Markdown
[Cunha et al. "Boosting, Voting Classifiers and Randomized Sample Compression Schemes." Proceedings of The 36th International Conference on Algorithmic Learning Theory, 2025.](https://mlanthology.org/alt/2025/cunha2025alt-boosting/)
BibTeX
@inproceedings{cunha2025alt-boosting,
  title     = {{Boosting, Voting Classifiers and Randomized Sample Compression Schemes}},
  author    = {Cunha, Arthur and Larsen, Kasper Green and Ritzert, Martin},
  booktitle = {Proceedings of The 36th International Conference on Algorithmic Learning Theory},
  year      = {2025},
  pages     = {390--404},
  volume    = {272},
  url       = {https://mlanthology.org/alt/2025/cunha2025alt-boosting/}
}