A Bayesian Framework for Online Classifier Ensemble
Abstract
We propose a Bayesian framework for recursively estimating the classifier weights in online learning of a classifier ensemble. In contrast with past methods, such as stochastic gradient descent or online boosting, our framework estimates the weights in terms of evolving posterior distributions. For a specified class of loss functions, we show that it is possible to formulate a suitably defined likelihood function and hence use the posterior distribution as an approximation to the global empirical loss minimizer. If the stream of training data is sampled from a stationary process, we can also show that our framework admits a faster rate of convergence to the expected loss minimizer than standard stochastic gradient descent. In experiments with real-world datasets, our formulation often performs better than online boosting algorithms.
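To illustrate the flavor of the approach, here is a minimal sketch of a recursive Bayesian-style weight update for an ensemble. It treats exp(-eta * loss) as a likelihood term and renormalizes after each training example, so the weights track a posterior over which base classifier to trust. The learning rate `eta`, the 0-1 loss, and the weighted-vote prediction rule are illustrative assumptions, not the paper's exact formulation.

```python
import math

def update_posterior(weights, losses, eta=1.0):
    """One recursive update: multiply each weight by the likelihood-like
    factor exp(-eta * loss) and renormalize. (Illustrative assumption;
    the paper's likelihood construction is more specific.)"""
    unnorm = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def ensemble_predict(weights, votes):
    """Weighted vote over {-1, +1} base-classifier outputs."""
    score = sum(w * v for w, v in zip(weights, votes))
    return 1 if score >= 0 else -1

# Toy stream: (votes of 3 base classifiers, true label) per round.
stream = [([1, -1, 1], 1), ([1, 1, -1], 1), ([-1, -1, 1], -1)]
weights = [1 / 3, 1 / 3, 1 / 3]  # uniform prior over classifiers
for votes, y in stream:
    losses = [0.0 if v == y else 1.0 for v in votes]  # 0-1 loss per classifier
    weights = update_posterior(weights, losses)
```

Because the likelihood factors multiply across rounds, after the stream each weight is proportional to exp(-eta * cumulative loss), so classifiers with lower accumulated loss dominate the posterior.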
Cite
Text
Bai et al. "A Bayesian Framework for Online Classifier Ensemble." International Conference on Machine Learning, 2014.

Markdown
[Bai et al. "A Bayesian Framework for Online Classifier Ensemble." International Conference on Machine Learning, 2014.](https://mlanthology.org/icml/2014/bai2014icml-bayesian/)

BibTeX
@inproceedings{bai2014icml-bayesian,
title = {{A Bayesian Framework for Online Classifier Ensemble}},
author = {Bai, Qinxun and Lam, Henry and Sclaroff, Stan},
booktitle = {International Conference on Machine Learning},
year = {2014},
pages = {1584--1592},
volume = {32},
url = {https://mlanthology.org/icml/2014/bai2014icml-bayesian/}
}