Learning a Priori Constrained Weighted Majority Votes

Abstract

Weighted majority votes allow one to combine the output of several classifiers or voters. MinCq is a recent algorithm for optimizing the weight of each voter based on the minimization of a theoretical bound over the risk of the vote with elegant PAC-Bayesian generalization guarantees. However, while it has demonstrated good performance when combining weak classifiers, MinCq cannot make use of the useful a priori knowledge that one may have when using a mixture of weak and strong voters. In this paper, we propose P-MinCq, an extension of MinCq that can incorporate such knowledge in the form of a constraint over the distribution of the weights, along with general proofs of convergence that stand in the sample compression setting for data-dependent voters. The approach is applied to a vote of $k$-NN classifiers with a specific modeling of the voters’ performance. P-MinCq significantly outperforms the classic $k$-NN classifier, a symmetric NN and MinCq using the same voters. We show that it is also competitive with LMNN, a popular metric learning algorithm, and that combining both approaches further reduces the error.
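To make the setting concrete, here is a minimal, illustrative sketch (not the authors' code) of a weighted majority vote over $k$-NN voters with different values of $k$, as in the paper's experimental setting. The weights below are hypothetical placeholders; P-MinCq would learn them by minimizing a bound on the risk of the vote under an a priori constraint on their distribution.

```python
def knn_predict(train, point, k):
    """Predict a label in {-1, +1} by majority among the k nearest
    training points (squared Euclidean distance)."""
    ranked = sorted(train,
                    key=lambda xy: sum((a - b) ** 2
                                       for a, b in zip(xy[0], point)))
    vote = sum(label for _, label in ranked[:k])
    return 1 if vote >= 0 else -1

def weighted_majority_vote(train, point, weights):
    """Combine k-NN voters (k = 1..len(weights)); the output is the
    sign of the weighted sum of the individual voters' predictions."""
    score = sum(w * knn_predict(train, point, k)
                for k, w in enumerate(weights, start=1))
    return 1 if score >= 0 else -1

# Toy 1-D data: negatives near 0, positives near 10.
train = [((0.0,), -1), ((1.0,), -1), ((2.0,), -1),
         ((9.0,), 1), ((10.0,), 1), ((11.0,), 1)]
weights = [0.5, 0.3, 0.2]  # hypothetical weights over k in {1, 2, 3}

print(weighted_majority_vote(train, (9.5,), weights))  # -> 1
print(weighted_majority_vote(train, (0.5,), weights))  # -> -1
```

The point of the paper is precisely that these weights need not be uniform: prior knowledge about which voters are strong (e.g. particular values of $k$) can be encoded as a constraint on the weight distribution.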

Cite

Text

Bellet et al. "Learning a Priori Constrained Weighted Majority Votes." Machine Learning, 2014. doi:10.1007/s10994-014-5462-z

Markdown

[Bellet et al. "Learning a Priori Constrained Weighted Majority Votes." Machine Learning, 2014.](https://mlanthology.org/mlj/2014/bellet2014mlj-learning/) doi:10.1007/s10994-014-5462-z

BibTeX

@article{bellet2014mlj-learning,
  title     = {{Learning a Priori Constrained Weighted Majority Votes}},
  author    = {Bellet, Aurélien and Habrard, Amaury and Morvant, Emilie and Sebban, Marc},
  journal   = {Machine Learning},
  year      = {2014},
  pages     = {129--154},
  doi       = {10.1007/s10994-014-5462-z},
  volume    = {97},
  url       = {https://mlanthology.org/mlj/2014/bellet2014mlj-learning/}
}