A Simple Multi-Class Boosting Framework with Theoretical Guarantees and Empirical Proficiency

Abstract

There is a need for simple yet accurate white-box learning systems that train quickly and with little data. To this end, we present REBEL, a multi-class boosting method, together with a novel family of weak learners called localized similarities. Our framework provably minimizes the training error of any dataset at an exponential rate. We carry out experiments on a variety of synthetic and real datasets, demonstrating a consistent tendency to avoid overfitting. We evaluate our method on MNIST and standard UCI datasets against other state-of-the-art methods, demonstrating its empirical proficiency.
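To give a concrete sense of the multi-class boosting setting the abstract describes, here is a minimal sketch of a generic SAMME-style multi-class AdaBoost with decision stumps. This is NOT the paper's REBEL algorithm or its localized-similarity weak learners; it only illustrates the general mechanism of reweighting examples so that an ensemble's weighted training error is driven down round by round.

```python
import numpy as np

# Generic SAMME-style multi-class AdaBoost sketch (illustrative only;
# not the REBEL algorithm from the paper).

def stump_fit(X, y, w, n_classes):
    """Exhaustively find the (feature, threshold, left-class, right-class)
    decision stump minimizing the weighted 0/1 error."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for cls_lo in range(n_classes):
                for cls_hi in range(n_classes):
                    pred = np.where(X[:, j] <= t, cls_lo, cls_hi)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, cls_lo, cls_hi)
    return best

def stump_predict(stump, X):
    _, j, t, lo, hi = stump
    return np.where(X[:, j] <= t, lo, hi)

def boost(X, y, n_rounds=10):
    n, K = len(y), len(np.unique(y))
    w = np.full(n, 1.0 / n)          # uniform initial example weights
    ensemble = []
    for _ in range(n_rounds):
        stump = stump_fit(X, y, w, K)
        err = max(stump[0], 1e-10)   # guard against zero error
        if err >= 1 - 1.0 / K:       # weak learner no better than chance
            break
        # SAMME learner weight: the extra log(K-1) term handles K classes.
        alpha = np.log((1 - err) / err) + np.log(K - 1)
        pred = stump_predict(stump, X)
        w *= np.exp(alpha * (pred != y))   # upweight misclassified examples
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X, n_classes):
    """Weighted vote over the ensemble's stump predictions."""
    votes = np.zeros((len(X), n_classes))
    for alpha, stump in ensemble:
        votes[np.arange(len(X)), stump_predict(stump, X)] += alpha
    return votes.argmax(axis=1)
```

On a small three-class, one-dimensional dataset, a few rounds of this loop suffice to fit the training set exactly; the paper's contribution is a framework with a provable exponential rate for this kind of training-error decrease, using its own weak-learner family.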

Cite

Text

Appel and Perona. "A Simple Multi-Class Boosting Framework with Theoretical Guarantees and Empirical Proficiency." International Conference on Machine Learning, 2017.

Markdown

[Appel and Perona. "A Simple Multi-Class Boosting Framework with Theoretical Guarantees and Empirical Proficiency." International Conference on Machine Learning, 2017.](https://mlanthology.org/icml/2017/appel2017icml-simple/)

BibTeX

@inproceedings{appel2017icml-simple,
  title     = {{A Simple Multi-Class Boosting Framework with Theoretical Guarantees and Empirical Proficiency}},
  author    = {Appel, Ron and Perona, Pietro},
  booktitle = {International Conference on Machine Learning},
  year      = {2017},
  pages     = {186--194},
  volume    = {70},
  url       = {https://mlanthology.org/icml/2017/appel2017icml-simple/}
}