A Fixed-Point Operator for Inference in Variational Bayesian Latent Gaussian Models

Abstract

Latent Gaussian Models (LGMs) provide a rich modeling framework with general inference procedures. The variational approximation offers an effective solution for such models and has attracted a significant amount of interest. Recent work proposed a fixed-point (FP) update procedure to optimize the covariance matrix in the variational solution and demonstrated its efficacy in specific models. This paper makes three contributions. First, it shows that the same approach can be used more generally in extensions of LGMs. Second, it provides an analysis identifying conditions for the convergence of the FP method. Third, it provides an extensive experimental evaluation in Gaussian processes, sparse Gaussian processes, and generalized linear models, with several non-conjugate observation likelihoods, showing wide applicability of the FP method and a significant advantage over gradient-based optimization.
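The flavor of the FP update can be illustrated with a minimal sketch (not the paper's exact algorithm). In a variational Gaussian approximation q(f) = N(m, V) under a Gaussian prior N(0, K), stationarity of the variational objective in V gives V = (K⁻¹ + diag(λ))⁻¹, where λᵢ = −2 ∂/∂vᵢ E_q[log p(yᵢ|fᵢ)]; iterating this equation is the fixed-point update. The example below assumes a Poisson likelihood with a log link, for which λᵢ = exp(mᵢ + vᵢ/2) in closed form; the function name, kernel, and data are illustrative.

```python
import numpy as np

def fp_covariance(K, m, n_iter=50, tol=1e-10):
    """Hypothetical sketch: iterate V <- (K^{-1} + diag(lambda(V)))^{-1}
    with the variational mean m held fixed. For the Poisson likelihood
    p(y_i|f_i) = Poisson(exp(f_i)), E_q[log p(y_i|f_i)] = y_i m_i
    - exp(m_i + v_i/2) + const, so lambda_i = exp(m_i + v_i/2)
    (independent of y_i; the data enter through the mean update,
    which is omitted here)."""
    K_inv = np.linalg.inv(K)
    V = K.copy()                        # initialize at the prior covariance
    for _ in range(n_iter):
        v = np.diag(V)                  # marginal variances of q
        lam = np.exp(m + 0.5 * v)       # -2 * dE_q[log lik]/dv, Poisson case
        V_new = np.linalg.inv(K_inv + np.diag(lam))
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V

# Toy example: RBF-kernel GP prior over 5 inputs.
x = np.linspace(0.0, 1.0, 5)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.3 ** 2) + 1e-6 * np.eye(5)
m = np.zeros(5)
V = fp_covariance(K, m)
# Since lambda > 0, the posterior marginal variances diag(V) shrink
# below the prior variances diag(K).
```

Because λ > 0, each iterate stays positive definite; the paper's analysis addresses when such iterations actually converge.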

Cite

Text

Sheth and Khardon. "A Fixed-Point Operator for Inference in Variational Bayesian Latent Gaussian Models." International Conference on Artificial Intelligence and Statistics, 2016.

Markdown

[Sheth and Khardon. "A Fixed-Point Operator for Inference in Variational Bayesian Latent Gaussian Models." International Conference on Artificial Intelligence and Statistics, 2016.](https://mlanthology.org/aistats/2016/sheth2016aistats-fixed/)

BibTeX

@inproceedings{sheth2016aistats-fixed,
  title     = {{A Fixed-Point Operator for Inference in Variational Bayesian Latent Gaussian Models}},
  author    = {Sheth, Rishit and Khardon, Roni},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year      = {2016},
  pages     = {761--769},
  url       = {https://mlanthology.org/aistats/2016/sheth2016aistats-fixed/}
}