MCMC for Variationally Sparse Gaussian Processes

Abstract

Gaussian process (GP) models form a core part of probabilistic machine learning. Considerable research effort has been made into attacking three issues with GP models: how to compute efficiently when the number of data points is large; how to approximate the posterior when the likelihood is not Gaussian; and how to estimate covariance function parameter posteriors. This paper simultaneously addresses these, using a variational approximation to the posterior which is sparse in support of the function but otherwise free-form. The result is a Hybrid Monte Carlo sampling scheme which allows for a non-Gaussian approximation over the function values and covariance parameters simultaneously, with efficient computations based on inducing-point sparse GPs.
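
The central idea, running HMC jointly over inducing-point function values and covariance hyperparameters so that both are sampled rather than optimised, can be illustrated with a short sketch. The following is a minimal, hypothetical JAX illustration, not the authors' implementation: the toy Bernoulli-logit model, the fixed inducing inputs, and all names (rbf, log_joint, hmc_step) are assumptions, and for brevity the sketch evaluates the likelihood at the conditional mean of p(f | u) instead of integrating over p(f | u) as the paper's variational bound does. The inducing values are whitened (u = chol(Kuu) v) so that HMC mixes over v and the log-lengthscale together.

# Hypothetical sketch: joint HMC over whitened inducing values and a
# kernel hyperparameter for a sparse GP binary classifier (Bernoulli-logit).
import jax
import jax.numpy as jnp
from jax import grad, random

def rbf(x1, x2, log_ell):
    # Squared-exponential kernel with lengthscale exp(log_ell).
    d = (x1[:, None] - x2[None, :]) / jnp.exp(log_ell)
    return jnp.exp(-0.5 * d ** 2)

# Toy 1-D classification data and fixed inducing inputs (assumptions).
X = jnp.linspace(-3.0, 3.0, 50)
y = (X > 0.0).astype(jnp.float32)
Z = jnp.linspace(-3.0, 3.0, 6)

def log_joint(q):
    # q packs the whitened inducing values v (u = chol(Kuu) v) and log_ell.
    v, log_ell = q[:-1], q[-1]
    Kuu = rbf(Z, Z, log_ell) + 1e-6 * jnp.eye(Z.shape[0])
    L = jnp.linalg.cholesky(Kuu)
    # Conditional mean of p(f | u) at the training inputs:
    # Kfu Kuu^{-1} u = Kfu L^{-T} v  (simplification; see lead-in above).
    f = rbf(X, Z, log_ell) @ jax.scipy.linalg.solve_triangular(L.T, v, lower=False)
    log_lik = jnp.sum(y * f - jnp.logaddexp(0.0, f))        # Bernoulli-logit
    log_prior = -0.5 * jnp.sum(v ** 2) - 0.5 * log_ell ** 2  # v ~ N(0, I), log_ell ~ N(0, 1)
    return log_lik + log_prior

def leapfrog(q, p, step, n):
    # Standard leapfrog integration of Hamilton's equations.
    g = grad(log_joint)(q)
    p = p + 0.5 * step * g
    for i in range(n):
        q = q + step * p
        g = grad(log_joint)(q)
        p = p + (step if i < n - 1 else 0.5 * step) * g
    return q, p

def hmc_step(key, q, step=0.05, n_leap=15):
    # One HMC transition with a Metropolis accept/reject correction.
    key_p, key_a = random.split(key)
    p0 = random.normal(key_p, q.shape)
    q_new, p_new = leapfrog(q, p0, step, n_leap)
    h0 = -log_joint(q) + 0.5 * jnp.sum(p0 ** 2)
    h1 = -log_joint(q_new) + 0.5 * jnp.sum(p_new ** 2)
    accept = random.uniform(key_a) < jnp.exp(h0 - h1)
    return jnp.where(accept, q_new, q)

key = random.PRNGKey(0)
q = jnp.zeros(Z.shape[0] + 1)  # [v, log_ell], initialised at zero
for _ in range(200):
    key, sub = random.split(key)
    q = hmc_step(sub, q)
print("sampled lengthscale:", jnp.exp(q[-1]))

Because every kernel matrix above involves at most the M inducing inputs against the N data points, each gradient evaluation costs O(NM^2), which is the efficiency the abstract attributes to inducing-point sparse GPs.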

Cite

Text

Hensman et al. "MCMC for Variationally Sparse Gaussian Processes." Neural Information Processing Systems, 2015.

Markdown

[Hensman et al. "MCMC for Variationally Sparse Gaussian Processes." Neural Information Processing Systems, 2015.](https://mlanthology.org/neurips/2015/hensman2015neurips-mcmc/)

BibTeX

@inproceedings{hensman2015neurips-mcmc,
  title     = {{MCMC for Variationally Sparse Gaussian Processes}},
  author    = {Hensman, James and Matthews, Alexander G. de G. and Filippone, Maurizio and Ghahramani, Zoubin},
  booktitle = {Neural Information Processing Systems},
  year      = {2015},
  pages     = {1648--1656},
  url       = {https://mlanthology.org/neurips/2015/hensman2015neurips-mcmc/}
}