Improving Algorithms for Boosting

Abstract

Motivated by results in information theory, we describe a modification of the popular boosting algorithm AdaBoost and assess its performance both theoretically and empirically. We provide theoretical and empirical evidence that the proposed boosting scheme will have lower training and testing error than the original (non-confidence-rated) version of AdaBoost. Our modified boosting algorithm and its analysis also suggest an explanation for why boosting with confidence-rated predictions often markedly outperforms boosting without confidence-rated predictions. Finally, our motivations and analyses provide further impetus for the study of boosting in an information-theoretic, as opposed to decision-theoretic, light.

1 Introduction

Boosting is a mechanism for training a sequence of "weak" learners and combining the hypotheses generated by these weak learners so as to obtain an aggregate hypothesis which is highly accurate. One of the most popular and widely studied [11, 5...
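The introduction describes boosting as training a sequence of weak learners and combining their hypotheses into a highly accurate aggregate. For context, here is a minimal sketch of the classic (non-confidence-rated) AdaBoost that the paper takes as its starting point, using 1-D decision stumps as weak learners; this is standard textbook AdaBoost, not the paper's proposed modification, and the toy data and function names are illustrative:

```python
import numpy as np

def train_stump(X, y, w):
    """Pick the threshold stump with lowest weighted error (labels in {-1, +1})."""
    best = None
    for thresh in np.unique(X):
        for sign in (1, -1):
            pred = sign * np.where(X >= thresh, 1, -1)
            err = np.sum(w[pred != y])
            if best is None or err < best[0]:
                best = (err, thresh, sign)
    err, thresh, sign = best
    return err, (lambda x, t=thresh, s=sign: s * np.where(x >= t, 1, -1))

def adaboost(X, y, rounds=20):
    """Discrete AdaBoost: reweight examples each round, combine stumps by alpha."""
    n = len(X)
    w = np.full(n, 1.0 / n)               # start with the uniform distribution
    stumps, alphas = [], []
    for _ in range(rounds):
        err, h = train_stump(X, y, w)
        err = max(err, 1e-12)             # guard against a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * h(X))    # up-weight the mistakes
        w /= w.sum()
        stumps.append(h)
        alphas.append(alpha)
    # aggregate hypothesis: weighted majority vote of the weak hypotheses
    return lambda x: np.sign(sum(a * h(x) for a, h in zip(alphas, stumps)))

# toy 1-D data that no single stump can classify correctly
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1, 1, -1, -1, 1, 1])
H = adaboost(X, y, rounds=20)
```

After a few rounds the weighted vote of stumps fits this pattern exactly, even though every individual stump errs on at least two points, which is the "weak learners combine into a strong one" phenomenon the abstract refers to.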

Cite

Text

Aslam. "Improving Algorithms for Boosting." Annual Conference on Computational Learning Theory, 2000.

Markdown

[Aslam. "Improving Algorithms for Boosting." Annual Conference on Computational Learning Theory, 2000.](https://mlanthology.org/colt/2000/aslam2000colt-improving/)

BibTeX

@inproceedings{aslam2000colt-improving,
  title     = {{Improving Algorithms for Boosting}},
  author    = {Aslam, Javed A.},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2000},
  pages     = {200--207},
  url       = {https://mlanthology.org/colt/2000/aslam2000colt-improving/}
}