Improper Multiclass Boosting
Abstract
We study the setting of multiclass boosting with a possibly large number of classes. A recent work by Brukhim, Hazan, Moran, and Schapire (2021) proved a hardness result for a large class of natural boosting algorithms we call proper. These algorithms output predictors that correspond to a plurality-vote aggregation of weak hypotheses. In particular, they showed that proper boosting algorithms must incur a large cost that scales with the number of classes.

In this work we propose an efficient improper multiclass boosting algorithm that circumvents this hardness result. A key component of our algorithm is based on the technique of list learning. In list learning, instead of predicting a single outcome for a given unseen input, the goal is to provide a short menu of predictions. The resulting boosting algorithm has sample and oracle complexity bounds that are entirely independent of the number of classes.

A corollary of the above is that plurality-vote over a learnable class is also learnable. We complement this result by showing that other simple aggregations over hypotheses from a learnable class do not preserve learnability, unlike in the binary setting.
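To make the two aggregation notions concrete, here is a minimal sketch contrasting plurality-vote aggregation (a single winning label) with list-style prediction (a short menu of candidate labels). The weak hypotheses and input below are hypothetical toy functions for illustration only, not the paper's actual weak learners.

```python
from collections import Counter

def plurality_vote(hypotheses, x):
    """Aggregate weak hypotheses by plurality vote: each hypothesis
    casts one class-label vote, and the most common label wins."""
    votes = Counter(h(x) for h in hypotheses)
    return votes.most_common(1)[0][0]

def list_predict(hypotheses, x, k):
    """List-learning-style prediction: instead of committing to one
    outcome, return a short menu of the k most-voted labels."""
    votes = Counter(h(x) for h in hypotheses)
    return [label for label, _ in votes.most_common(k)]

# Hypothetical weak hypotheses over classes {0, 1, 2}.
hs = [lambda x: x % 3, lambda x: (x + 1) % 3, lambda x: x % 3]

print(plurality_vote(hs, 4))   # → 1  (two of three hypotheses vote 1)
print(list_predict(hs, 4, 2))  # → [1, 2]  (a menu of 2 candidates)
```

The point of the contrast: a proper booster must output something of the first form, a single plurality-vote predictor, which is where the class-dependent hardness arises; the improper algorithm is free to work with short lists internally.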
Cite
Text
Brukhim et al. "Improper Multiclass Boosting." Conference on Learning Theory, 2023.
Markdown
[Brukhim et al. "Improper Multiclass Boosting." Conference on Learning Theory, 2023.](https://mlanthology.org/colt/2023/brukhim2023colt-improper/)
BibTeX
@inproceedings{brukhim2023colt-improper,
title = {{Improper Multiclass Boosting}},
author = {Brukhim, Nataly and Hanneke, Steve and Moran, Shay},
booktitle = {Conference on Learning Theory},
year = {2023},
pages = {5433-5452},
volume = {195},
url = {https://mlanthology.org/colt/2023/brukhim2023colt-improper/}
}