From PAC-Bayes Bounds to Quadratic Programs for Majority Votes
Abstract
We propose to construct a weighted majority vote on a set of basis functions by minimizing a risk bound (called the C-bound) that depends on the first two moments of the margin of the Q-convex combination realized on the training data. This bound minimization problem turns out to be a quadratic program that can be solved efficiently. A first version of the algorithm, designed for the supervised inductive setting, is competitive with AdaBoost, MDBoost and the SVM. The second version, designed for the transductive setting, competes well with TSVM. We also propose a new PAC-Bayes theorem that bounds the difference between the "true" value of the C-bound and its empirical estimate and that, unexpectedly, contains no KL-divergence.
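The C-bound described above can be written as 1 - (first moment)^2 / (second moment) of the margin, so fixing the first moment and minimizing the second moment over the voter weights is a quadratic program. The following is a minimal sketch of that idea, not the paper's actual algorithm: it uses SciPy's general-purpose SLSQP solver rather than a dedicated QP solver, the toy data and the five ±1-valued voters are entirely hypothetical, and the fixed first moment `mu` is simply set to the uniform vote's mean margin so the constraint is trivially feasible.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy data: n examples in 2D, m = 5 voters with outputs in {-1, +1}.
rng = np.random.default_rng(0)
n, m = 200, 5
X = rng.normal(size=(n, 2))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=n))
H = np.sign(np.column_stack([X[:, 0], X[:, 1], X[:, 0] + X[:, 1],
                             rng.choice([-1.0, 1.0], n),
                             rng.choice([-1.0, 1.0], n)]))

# Margin of voter i on example j: y_j * h_i(x_j).
M = y[:, None] * H                      # shape (n, m)
A = (M.T @ M) / n                       # second-moment matrix of the margins

# Fix the first moment of the margin at the uniform vote's value (a design
# choice for this sketch, so the starting point q0 is feasible).
q0 = np.full(m, 1.0 / m)
mu = float(M.mean(axis=0) @ q0)

# QP: minimize the second moment q^T A q subject to a fixed first moment
# and q lying on the probability simplex.
cons = [{"type": "eq", "fun": lambda q: M.mean(axis=0) @ q - mu},
        {"type": "eq", "fun": lambda q: q.sum() - 1.0}]
res = minimize(lambda q: q @ A @ q, q0, method="SLSQP",
               bounds=[(0.0, 1.0)] * m, constraints=cons)
q = res.x

first = M.mean(axis=0) @ q              # equals mu up to solver tolerance
second = q @ A @ q
c_bound = 1.0 - first**2 / second       # empirical C-bound of the weighted vote
print(q, c_bound)
```

Since the objective is convex (A is positive semidefinite) and both constraints are linear, this is a genuine quadratic program; shrinking the second moment at fixed first moment directly shrinks the empirical C-bound.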
Cite
Roy et al. "From PAC-Bayes Bounds to Quadratic Programs for Majority Votes." International Conference on Machine Learning, 2011.

BibTeX
@inproceedings{roy2011icml-pac,
title = {{From PAC-Bayes Bounds to Quadratic Programs for Majority Votes}},
author = {Roy, Jean-Francis and Laviolette, François and Marchand, Mario},
booktitle = {International Conference on Machine Learning},
year = {2011},
pages = {649--656},
url = {https://mlanthology.org/icml/2011/roy2011icml-pac/}
}