Minimum Majority Classification and Boosting

Abstract

Motivated by a theoretical analysis of the generalization of boosting, we examine learning algorithms that work by trying to fit data using a simple majority vote over a small number of hypotheses from a collection. We provide experimental evidence that an algorithm based on this principle outputs hypotheses that often generalize nearly as well as those output by boosting, and sometimes better. We also provide experimental evidence for an additional reason that boosting algorithms generalize well: they take advantage of cases in which there are many simple hypotheses with independent errors.
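The last point, that a majority vote over hypotheses with independent errors can be much more accurate than any single hypothesis, can be illustrated with a minimal simulation. This sketch is not from the paper; the hypothesis count, error rate, and trial count are illustrative assumptions.

```python
import random

def majority_vote(preds):
    # Predict 1 if more than half of the hypotheses predict 1.
    return 1 if sum(preds) > len(preds) / 2 else 0

def voted_error(n_hyp, err, trials=20000, seed=0):
    # Each hypothesis errs independently with probability `err`
    # on a fixed true label of 1; estimate how often the
    # majority vote over all n_hyp hypotheses errs.
    rng = random.Random(seed)
    mistakes = 0
    for _ in range(trials):
        preds = [0 if rng.random() < err else 1 for _ in range(n_hyp)]
        if majority_vote(preds) != 1:
            mistakes += 1
    return mistakes / trials

# With 11 hypotheses, each wrong 30% of the time independently,
# the majority vote's error rate drops well below 30%.
single_error = 0.3
print(voted_error(11, single_error))
```

Under independence the voted error is the binomial tail probability of at least 6 of 11 errors, roughly 0.08 here, which is the effect the abstract suggests boosting exploits.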

Cite

Text

Long. "Minimum Majority Classification and Boosting." AAAI Conference on Artificial Intelligence, 2002. doi:10.5555/777092.777123

Markdown

[Long. "Minimum Majority Classification and Boosting." AAAI Conference on Artificial Intelligence, 2002.](https://mlanthology.org/aaai/2002/long2002aaai-minimum/) doi:10.5555/777092.777123

BibTeX

@inproceedings{long2002aaai-minimum,
  title     = {{Minimum Majority Classification and Boosting}},
  author    = {Long, Philip M.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2002},
  pages     = {181--186},
  doi       = {10.5555/777092.777123},
  url       = {https://mlanthology.org/aaai/2002/long2002aaai-minimum/}
}