On Margins and Generalisation for Voting Classifiers
Abstract
We study the generalisation properties of majority voting on finite ensembles of classifiers, proving margin-based generalisation bounds via the PAC-Bayes theory. These provide state-of-the-art guarantees on a number of classification tasks. Our central results leverage the Dirichlet posteriors studied recently by Zantedeschi et al. (2021) for training voting classifiers; in contrast to that work, our bounds apply to non-randomised votes via the use of margins. Our contributions add perspective to the debate on the "margins theory" proposed by Schapire et al. (1998) for the generalisation of ensemble classifiers.
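As a point of reference for the objects named in the abstract, the following is a minimal Python sketch, not the paper's implementation, of a weighted majority vote over a finite ensemble with weights drawn from a Dirichlet distribution, together with the margins in terms of which such bounds are stated; the function name vote_margins and the uniform concentration parameter are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def vote_margins(votes, weights, y):
    # Margin of the weighted majority vote on each example, for binary
    # labels in {-1, +1}: margin_i = y_i * sum_j weights_j * votes[i, j].
    # The vote classifies example i correctly iff its margin is positive.
    return y * (votes @ weights)

# Toy ensemble: 5 voters, each correct with probability 0.7, on 8 examples.
n_samples, n_voters = 8, 5
y = rng.choice([-1, 1], size=n_samples)
correct = rng.random((n_samples, n_voters)) < 0.7
votes = np.where(correct, y[:, None], -y[:, None])

# Ensemble weights sampled from a Dirichlet distribution (the uniform
# concentration vector here is an arbitrary illustrative choice).
weights = rng.dirichlet(np.ones(n_voters))

m = vote_margins(votes, weights, y)
print("margins:", np.round(m, 2))
print("empirical error of the majority vote:", np.mean(m <= 0))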
Cite
Text
Biggs et al. "On Margins and Generalisation for Voting Classifiers." Neural Information Processing Systems, 2022.
Markdown
[Biggs et al. "On Margins and Generalisation for Voting Classifiers." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/biggs2022neurips-margins/)
BibTeX
@inproceedings{biggs2022neurips-margins,
  title = {{On Margins and Generalisation for Voting Classifiers}},
  author = {Biggs, Felix and Zantedeschi, Valentina and Guedj, Benjamin},
  booktitle = {Neural Information Processing Systems},
  year = {2022},
  url = {https://mlanthology.org/neurips/2022/biggs2022neurips-margins/}
}