Markov Chain Score Ascent: A Unifying Framework of Variational Inference with Markovian Gradients

Abstract

Minimizing the inclusive Kullback-Leibler (KL) divergence with stochastic gradient descent (SGD) is challenging since its gradient is defined as an integral over the posterior. Recently, multiple methods have been proposed to run SGD with biased gradient estimates obtained from a Markov chain. This paper provides the first non-asymptotic convergence analysis of these methods by establishing their mixing rate and gradient variance. To do this, we demonstrate that these methods—which we collectively refer to as Markov chain score ascent (MCSA) methods—can be cast as special cases of the Markov chain gradient descent framework. Furthermore, by leveraging this new understanding, we develop a novel MCSA scheme, parallel MCSA (pMCSA), that achieves a tighter bound on the gradient variance. We demonstrate that this improved theoretical result translates to superior empirical performance.
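The idea the abstract describes can be sketched as follows: at each SGD step, advance a Markov chain whose stationary distribution is the posterior, then use the current chain state to form a (biased) score-gradient estimate of the inclusive KL. The toy target, Gaussian variational family, random-walk Metropolis kernel, and step sizes below are all illustrative assumptions, not the paper's actual MCSA or pMCSA construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p(z):
    # Unnormalized log-posterior; a standard normal serves as a toy target.
    return -0.5 * z**2

def log_q_grad(z, mu, log_sigma):
    # Gradients of log N(z; mu, sigma^2) w.r.t. (mu, log_sigma).
    sigma2 = np.exp(2.0 * log_sigma)
    return (z - mu) / sigma2, -1.0 + (z - mu) ** 2 / sigma2

# Markov chain score ascent sketch: one Metropolis step per SGD iteration,
# then a stochastic gradient step on KL(p || q) using the chain state.
z, mu, log_sigma, lr = 0.0, 2.0, 1.0, 0.01
for t in range(20000):
    prop = z + rng.normal(scale=1.0)              # random-walk proposal
    if np.log(rng.uniform()) < log_p(prop) - log_p(z):
        z = prop                                   # Metropolis accept/reject
    g_mu, g_ls = log_q_grad(z, mu, log_sigma)      # score of q at chain state
    mu += lr * g_mu                                # ascending E_p[log q(z)]
    log_sigma += lr * g_ls                         # equals descending the KL
```

Because the chain state is correlated across iterations, the gradient estimates are biased at any finite step, which is precisely why the paper's non-asymptotic analysis of mixing rate and gradient variance is needed.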

Cite

Text

Kim et al. "Markov Chain Score Ascent: A Unifying Framework of Variational Inference with Markovian Gradients." Neural Information Processing Systems, 2022.

Markdown

[Kim et al. "Markov Chain Score Ascent: A Unifying Framework of Variational Inference with Markovian Gradients." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/kim2022neurips-markov/)

BibTeX

@inproceedings{kim2022neurips-markov,
  title     = {{Markov Chain Score Ascent: A Unifying Framework of Variational Inference with Markovian Gradients}},
  author    = {Kim, Kyurae and Oh, Jisu and Gardner, Jacob and Dieng, Adji Bousso and Kim, Hongseok},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/kim2022neurips-markov/}
}