Further Explanation of the Effectiveness of Voting Methods: The Game Between Margins and Weights
Abstract
In this paper we present new bounds on the generalization error of a classifier f constructed as a convex combination of base classifiers from a class H. Algorithms that combine simple classifiers into a complex one, such as boosting and bagging, have attracted a lot of attention. We obtain new, sharper bounds on the generalization error of combined classifiers that take into account both the empirical distribution of "classification margins" and the "approximate dimension" of the classifier, which is defined in terms of the weights assigned to base classifiers by a voting algorithm. We study the performance of these bounds in several experiments with learning algorithms.
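The "classification margin" in the abstract refers to the quantity y·f(x), where f is the convex combination of base classifiers. A minimal sketch of computing an empirical margin distribution for such a voting classifier is below; the stump base classifiers, uniform weights, and toy data are all illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Toy data: n examples with labels in {-1, +1} (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.5 * rng.normal(size=200))
y[y == 0] = 1

# Base classifiers h_j from a class H: here, decision stumps
# thresholding a single coordinate (a common choice in boosting).
def stump(j, t):
    return lambda X: np.where(X[:, j] > t, 1.0, -1.0)

H = [stump(j, t) for j in range(5) for t in (-0.5, 0.0, 0.5)]

# Convex weights lambda_j (nonnegative, summing to 1), as a voting
# algorithm would assign; uniform here purely for simplicity.
lam = np.full(len(H), 1.0 / len(H))

# f(x) = sum_j lambda_j h_j(x); the margin of example i is y_i * f(x_i).
f = sum(l * h(X) for l, h in zip(lam, H))
margins = y * f

# Empirical margin distribution: fraction of examples with margin <= delta.
delta = 0.1
frac = np.mean(margins <= delta)
print(f"fraction with margin <= {delta}: {frac:.2f}")
```

Because the h_j take values in {-1, +1} and the weights are convex, every margin lies in [-1, 1]; the bounds discussed in the paper trade off how much of this distribution sits below a threshold δ against the effective dimension induced by the weights.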
Cite
Text
Koltchinskii et al. "Further Explanation of the Effectiveness of Voting Methods: The Game Between Margins and Weights." Annual Conference on Computational Learning Theory, 2001. doi:10.1007/3-540-44581-1_16

Markdown
[Koltchinskii et al. "Further Explanation of the Effectiveness of Voting Methods: The Game Between Margins and Weights." Annual Conference on Computational Learning Theory, 2001.](https://mlanthology.org/colt/2001/koltchinskii2001colt-further/) doi:10.1007/3-540-44581-1_16

BibTeX
@inproceedings{koltchinskii2001colt-further,
title = {{Further Explanation of the Effectiveness of Voting Methods: The Game Between Margins and Weights}},
author = {Koltchinskii, Vladimir and Panchenko, Dmitriy and Lozano, Fernando},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2001},
pages = {241--255},
doi = {10.1007/3-540-44581-1_16},
url = {https://mlanthology.org/colt/2001/koltchinskii2001colt-further/}
}