Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring
Abstract
In this paper we address the following question: "Can we approximately sample from a Bayesian posterior distribution if we are only allowed to touch a small mini-batch of data-items for every sample we generate?". An algorithm based on the Langevin equation with stochastic gradients (SGLD) was previously proposed to solve this, but its mixing rate was slow. By leveraging the Bayesian Central Limit Theorem, we extend the SGLD algorithm so that at high mixing rates it will sample from a normal approximation of the posterior, while for slow mixing rates it will mimic the behavior of SGLD with a pre-conditioner matrix. As a bonus, the proposed algorithm is reminiscent of Fisher scoring (with stochastic gradients) and as such is an efficient optimizer during burn-in.
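To make the idea in the abstract concrete, the sketch below shows preconditioned SGLD where the pre-conditioner is a running estimate of the empirical Fisher information built from mini-batch gradients. It is a minimal illustration, not the exact SGFS update from the paper; the toy Gaussian model, step size `eps`, and smoothing constant `kappa` are assumptions made for this example only.

```python
# Illustrative sketch (not the exact SGFS update from the paper): SGLD with a
# pre-conditioner estimated from the empirical Fisher of mini-batch gradients.
import numpy as np

rng = np.random.default_rng(0)

# Toy model: x_i ~ N(theta, 1) with prior theta ~ N(0, 10).
N = 10_000
data = rng.normal(loc=2.0, scale=1.0, size=N)

def grad_log_prior(theta):
    return -theta / 10.0

def grad_log_lik_items(theta, batch):
    # Per-item gradients of log N(x_i | theta, 1) with respect to theta.
    return batch - theta

def sample_posterior(num_steps=5000, batch_size=100, eps=0.01, kappa=0.95):
    theta = 0.0
    # Initialize the scalar Fisher estimate from one preliminary mini-batch.
    batch0 = rng.choice(data, size=batch_size, replace=False)
    fisher = N * grad_log_lik_items(theta, batch0).var()
    samples = []
    for _ in range(num_steps):
        batch = rng.choice(data, size=batch_size, replace=False)
        g_items = grad_log_lik_items(theta, batch)
        # Unbiased stochastic estimate of the full-data log-posterior gradient.
        g = grad_log_prior(theta) + (N / batch_size) * g_items.sum()
        # Running empirical-Fisher estimate from the spread of per-item gradients.
        fisher = kappa * fisher + (1.0 - kappa) * N * g_items.var()
        precond = 1.0 / fisher
        # Preconditioned Langevin step: drift plus matched injected noise.
        noise = rng.normal(scale=np.sqrt(eps * precond))
        theta = theta + 0.5 * eps * precond * g + noise
        samples.append(theta)
    return np.array(samples)

draws = sample_posterior()[2000:]          # discard burn-in
print("posterior mean ~", draws.mean())    # close to the data mean, ~2.0
print("posterior std  ~", draws.std())     # roughly 1/sqrt(N) = 0.01
```

The full SGFS algorithm additionally uses the Bayesian Central Limit Theorem so that, at high mixing rates, the chain targets the normal approximation of the posterior; that interpolation between the two regimes is omitted in this sketch for brevity.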
Cite
Text
Ahn et al. "Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring." International Conference on Machine Learning, 2012.
Markdown
[Ahn et al. "Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring." International Conference on Machine Learning, 2012.](https://mlanthology.org/icml/2012/ahn2012icml-bayesian/)
BibTeX
@inproceedings{ahn2012icml-bayesian,
title = {{Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring}},
author = {Ahn, Sungjin and Balan, Anoop Korattikara and Welling, Max},
booktitle = {International Conference on Machine Learning},
year = {2012},
url = {https://mlanthology.org/icml/2012/ahn2012icml-bayesian/}
}