Automated Variational Inference for Gaussian Process Models

Abstract

We develop an automated variational method for approximate inference in Gaussian process (GP) models whose posteriors are often intractable. Using a mixture of Gaussians as the variational distribution, we show that (i) the variational objective and its gradients can be approximated efficiently via sampling from univariate Gaussian distributions and (ii) the gradients of the GP hyperparameters can be obtained analytically regardless of the model likelihood. We further propose two instances of the variational distribution whose covariance matrices can be parametrized linearly in the number of observations. These results allow gradient-based optimization to be done efficiently in a black-box manner. Our approach is thoroughly verified on 5 models using 6 benchmark datasets, performing as well as the exact or hard-coded implementations while running orders of magnitude faster than alternative MCMC sampling approaches. Our method can be a valuable tool for practitioners and researchers to investigate new models with minimal effort in deriving model-specific inference algorithms.
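The key computational idea in statement (i) is that the expected log-likelihood term of the variational objective factorizes over data points, so it can be estimated by sampling only from univariate Gaussians. The sketch below illustrates this with a Monte Carlo estimate under a diagonal Gaussian variational distribution; the Bernoulli-logit likelihood and the function name are illustrative stand-ins, not the paper's actual implementation.

```python
import numpy as np

def mc_expected_log_lik(y, m, s, n_samples=1000, seed=0):
    """Monte Carlo estimate of sum_n E_{q(f_n)}[log p(y_n | f_n)],
    where q(f_n) = N(m_n, s_n^2) is a Gaussian variational marginal.

    Because the expectation factorizes over data points, only samples
    from univariate Gaussians are needed, regardless of the likelihood.
    Here a Bernoulli-logit likelihood (y in {0, 1}) stands in for an
    arbitrary black-box likelihood.
    """
    rng = np.random.default_rng(seed)
    # Draw f ~ N(m, s^2), one univariate Gaussian per data point;
    # shape (n_samples, n_points) via broadcasting.
    f = m + s * rng.standard_normal((n_samples, len(m)))
    # Pointwise log p(y | f) for the Bernoulli-logit model.
    log_lik = y * f - np.log1p(np.exp(f))
    # Average over samples, then sum over data points.
    return log_lik.mean(axis=0).sum()
```

The same sampled latent values can be reused to form stochastic gradients of the objective with respect to the variational parameters, which is what makes the approach black-box: only pointwise evaluations of the log-likelihood are required.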

Cite

Text

Nguyen and Bonilla. "Automated Variational Inference for Gaussian Process Models." Neural Information Processing Systems, 2014.

Markdown

[Nguyen and Bonilla. "Automated Variational Inference for Gaussian Process Models." Neural Information Processing Systems, 2014.](https://mlanthology.org/neurips/2014/nguyen2014neurips-automated/)

BibTeX

@inproceedings{nguyen2014neurips-automated,
  title     = {{Automated Variational Inference for Gaussian Process Models}},
  author    = {Nguyen, Trung V. and Bonilla, Edwin V.},
  booktitle = {Neural Information Processing Systems},
  year      = {2014},
  pages     = {1404--1412},
  url       = {https://mlanthology.org/neurips/2014/nguyen2014neurips-automated/}
}