Revisiting Agnostic Boosting
Abstract
Boosting is a key method in statistical learning, enabling the conversion of weak learners into strong ones. While well studied in the realizable case, the statistical properties of weak-to-strong learning remain less understood in the agnostic setting, where no assumptions are made on the distribution of the labels. In this work, we propose a new agnostic boosting algorithm with substantially improved sample complexity compared to prior works under very general assumptions. Our approach is based on a reduction to the realizable case, followed by a margin-based filtering of high-quality hypotheses. Furthermore, we show a nearly-matching lower bound, settling the sample complexity of agnostic boosting up to logarithmic factors.
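To make the high-level recipe in the abstract concrete, here is a minimal illustrative sketch of an agnostic-boosting-style pipeline: a realizable-case weak learner is invoked on reweighted samples, only hypotheses whose empirical margin clears a threshold are kept, and the survivors are aggregated by a majority vote. All names (`weak_learner`, `T`, `theta`) and the specific reweighting rule are assumptions for illustration, not the algorithm from the paper.

```python
# Illustrative sketch only: NOT the paper's algorithm. Assumes binary labels
# in {-1, +1} and a weak_learner(X, y) that returns a prediction function.
import numpy as np

def boost(X, y, weak_learner, T=50, theta=0.05, rng=None):
    """X: (n, d) features; y: (n,) labels in {-1, +1}."""
    rng = np.random.default_rng(rng)
    n = len(y)
    weights = np.full(n, 1.0 / n)
    kept = []
    for _ in range(T):
        # Realizable-style call: the weak learner only sees a reweighted sample.
        idx = rng.choice(n, size=n, replace=True, p=weights)
        h = weak_learner(X[idx], y[idx])
        preds = h(X)
        # Weighted correlation of the hypothesis with the labels.
        margin = np.sum(weights * y * preds)
        if margin >= theta:  # margin-based filtering: keep only high-quality hypotheses
            kept.append(h)
            # Mild multiplicative reweighting toward currently misclassified points.
            weights *= np.exp(-theta * y * preds)
            weights /= weights.sum()
    def strong(Xq):
        # Majority vote over the retained high-margin hypotheses.
        votes = sum(h(Xq) for h in kept)
        return np.sign(votes + 1e-12)
    return strong
```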
Cite
Text
da Cunha et al. "Revisiting Agnostic Boosting." Advances in Neural Information Processing Systems, 2025.
Markdown
[da Cunha et al. "Revisiting Agnostic Boosting." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/dacunha2025neurips-revisiting/)
BibTeX
@inproceedings{dacunha2025neurips-revisiting,
title = {{Revisiting Agnostic Boosting}},
author = {da Cunha, Arthur and Høgsgaard, Mikael Møller and Paudice, Andrea and Sun, Yuxin},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/dacunha2025neurips-revisiting/}
}