Online Bounds for Bayesian Algorithms
Abstract
We present a competitive analysis of Bayesian learning algorithms in the online learning setting and show that many simple Bayesian algorithms (such as Gaussian linear regression and Bayesian logistic regression) perform favorably when compared, in retrospect, to the single best model in the model class. The analysis does not assume that the Bayesian algorithms' modeling assumptions are "correct," and our bounds hold even if the data is adversarially chosen. For Gaussian linear regression (using logloss), our error bounds are comparable to the best bounds in the online learning literature, and we also provide a lower bound showing that Gaussian linear regression is optimal in a certain worst case sense. We also give bounds for some widely used maximum a posteriori (MAP) estimation algorithms, including regularized logistic regression.
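To make the online setting concrete, the sketch below runs the kind of Bayesian Gaussian linear regression algorithm the abstract refers to: at each round it predicts with the posterior predictive distribution, suffers the log loss of that prediction, and then does a conjugate posterior update. The accumulated log loss is the quantity the paper's bounds compare against the best fixed weight vector in hindsight. This is a minimal sketch under illustrative assumptions (a known noise variance sigma2, an isotropic Gaussian prior with variance prior_var, and the function name itself); it is not the paper's notation.

```python
import numpy as np

def online_bayes_linreg_logloss(X, y, sigma2=1.0, prior_var=1.0):
    """Cumulative log loss of online Bayesian (Gaussian) linear regression.

    Assumed model (illustrative): prior w ~ N(0, prior_var * I) and
    observations y_t = x_t^T w + N(0, sigma2). Each round predicts with
    the posterior predictive N(x_t^T m, sigma2 + x_t^T S x_t), then
    performs a conjugate (Kalman-form) update of the posterior.
    """
    d = X.shape[1]
    m = np.zeros(d)              # posterior mean of w
    S = prior_var * np.eye(d)    # posterior covariance of w
    total_logloss = 0.0
    for x, y_t in zip(X, y):
        Sx = S @ x
        pred_var = sigma2 + x @ Sx    # predictive variance for y_t
        resid = y_t - x @ m           # predictive mean error
        # Negative log of the Gaussian predictive density at y_t
        total_logloss += 0.5 * np.log(2 * np.pi * pred_var) \
                         + 0.5 * resid ** 2 / pred_var
        # Rank-one conjugate posterior update (Sherman-Morrison / Kalman form)
        m = m + Sx * resid / pred_var
        S = S - np.outer(Sx, Sx) / pred_var
    return total_logloss

# Toy usage: data generated by a fixed linear model. In the regime the
# paper analyzes, the Bayesian algorithm's cumulative log loss should
# exceed that of the generating weights by only a logarithmic-in-T term.
rng = np.random.default_rng(0)
T, d = 500, 3
X = rng.normal(size=(T, d))
w_star = rng.normal(size=d)
y = X @ w_star + rng.normal(scale=1.0, size=T)
print(online_bayes_linreg_logloss(X, y))
```

Roughly speaking, the paper's guarantee for this algorithm bounds the excess cumulative log loss over any fixed weight vector by a term that grows only logarithmically in the number of rounds, which is what "comparable to the best bounds in the online learning literature" refers to in the abstract.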
Cite
Text
Kakade and Ng. "Online Bounds for Bayesian Algorithms." Neural Information Processing Systems, 2004.
Markdown
[Kakade and Ng. "Online Bounds for Bayesian Algorithms." Neural Information Processing Systems, 2004.](https://mlanthology.org/neurips/2004/kakade2004neurips-online/)
BibTeX
@inproceedings{kakade2004neurips-online,
  title     = {{Online Bounds for Bayesian Algorithms}},
  author    = {Kakade, Sham M. and Ng, Andrew Y.},
  booktitle = {Neural Information Processing Systems},
  year      = {2004},
  pages     = {641--648},
  url       = {https://mlanthology.org/neurips/2004/kakade2004neurips-online/}
}