Fast Variational Inference for Gaussian Process Models Through KL-Correction

Abstract

Variational inference is a flexible approach to solving problems of intractability in Bayesian models. Unfortunately, the convergence of variational methods is often slow. We review a recently suggested variational approach for approximate inference in Gaussian process (GP) models and show how convergence may be dramatically improved through the use of a positive correction term added to the standard variational bound. We refer to the modified bound as the KL-corrected bound. The KL-corrected bound is a lower bound on the true likelihood, but an upper bound on the original variational bound. Timing comparisons between optimisation of the two bounds show that optimising the new bound consistently improves the speed of convergence.
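
The sandwich relationship described in the abstract can be written schematically as follows. With $\mathcal{L}_{\mathrm{var}}$ denoting the standard variational lower bound and $\Delta \ge 0$ the positive correction term, the KL-corrected bound $\mathcal{L}_{\mathrm{KLC}}$ satisfies (the notation here is illustrative; the paper derives the exact form of $\Delta$ for GP models):

\[
  \mathcal{L}_{\mathrm{var}}
  = \int q(\mathbf{f}) \log \frac{p(\mathbf{y}, \mathbf{f})}{q(\mathbf{f})}\, \mathrm{d}\mathbf{f}
  \;\le\;
  \mathcal{L}_{\mathrm{KLC}} = \mathcal{L}_{\mathrm{var}} + \Delta
  \;\le\;
  \log p(\mathbf{y}).
\]

Optimising $\mathcal{L}_{\mathrm{KLC}}$ therefore still maximises a valid lower bound on the log marginal likelihood, while always sitting at or above the standard bound it replaces.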

Cite

Text

King and Lawrence. "Fast Variational Inference for Gaussian Process Models Through KL-Correction." European Conference on Machine Learning, 2006. doi:10.1007/11871842_28

Markdown

[King and Lawrence. "Fast Variational Inference for Gaussian Process Models Through KL-Correction." European Conference on Machine Learning, 2006.](https://mlanthology.org/ecmlpkdd/2006/king2006ecml-fast/) doi:10.1007/11871842_28

BibTeX

@inproceedings{king2006ecml-fast,
  title     = {{Fast Variational Inference for Gaussian Process Models Through KL-Correction}},
  author    = {King, Nathaniel John and Lawrence, Neil D.},
  booktitle = {European Conference on Machine Learning},
  year      = {2006},
  pages     = {270--281},
  doi       = {10.1007/11871842_28},
  url       = {https://mlanthology.org/ecmlpkdd/2006/king2006ecml-fast/}
}