Scalable Training of Inference Networks for Gaussian-Process Models

Abstract

Inference in Gaussian process (GP) models is computationally challenging for large data and often difficult to approximate with a small number of inducing points. We explore an alternative approximation that employs stochastic inference networks for flexible inference. Unfortunately, for such networks, minibatch training makes it difficult to learn meaningful correlations over function outputs for a large dataset. We propose an algorithm that enables such training by tracking a stochastic, functional mirror-descent algorithm. Each iteration only requires considering a finite number of input locations, resulting in a scalable and easy-to-implement algorithm. Empirical results show performance comparable and sometimes superior to existing sparse variational GP methods.
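To make the abstract's idea concrete, below is a minimal numpy sketch of training an inference network against a GP by touching only finitely many input locations per step. It is not the authors' GPNet algorithm: the network is a simple random-feature linear model, the local target is a crude minibatch GP posterior mean, and all names and hyperparameters (rbf_kernel, beta, lr, etc.) are illustrative assumptions, with the mirror-descent flavor captured only by interpolating the network's prediction toward the target.

# Sketch of the abstract's idea, NOT the paper's exact algorithm: each step
# draws a data minibatch plus random "measurement" inputs, forms a target
# from the GP posterior at those finitely many locations, and nudges an
# inference network's predicted mean toward it.
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(x1, x2, lengthscale=0.5):
    """Squared-exponential kernel on 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

# Toy regression data.
N = 200
X = rng.uniform(-3, 3, size=N)
y = np.sin(X) + 0.1 * rng.normal(size=N)

# "Inference network": a linear model in random Fourier features, standing
# in for the stochastic network used in the paper.
D = 100
W = rng.normal(size=D)            # trainable network weights
omega = rng.normal(size=D) / 0.5  # random feature frequencies
phase = rng.uniform(0, 2 * np.pi, size=D)

def features(x):
    return np.sqrt(2.0 / D) * np.cos(np.outer(x, omega) + phase)

def net_mean(x):
    return features(x) @ W

noise, beta, lr, batch = 0.1, 0.1, 0.05, 16
for step in range(2000):
    # Minibatch of data plus random measurement points: the update only
    # ever evaluates the model at finitely many input locations.
    idx = rng.choice(N, size=batch, replace=False)
    xm = rng.uniform(-3, 3, size=batch)  # random measurement points
    xa = np.concatenate([X[idx], xm])

    # GP posterior mean at xa conditioned only on the minibatch (a crude
    # local target; the paper's mirror-descent target is more careful).
    Kbb = rbf_kernel(X[idx], X[idx]) + noise**2 * np.eye(batch)
    Kab = rbf_kernel(xa, X[idx])
    target = Kab @ np.linalg.solve(Kbb, y[idx])

    # Mirror-descent-style interpolation between the current prediction
    # and the local target, followed by a least-squares gradient step.
    mix = (1 - beta) * net_mean(xa) + beta * target
    Phi = features(xa)
    grad = Phi.T @ (net_mean(xa) - mix) / len(xa)
    W -= lr * grad

print("train RMSE:", np.sqrt(np.mean((net_mean(X) - y) ** 2)))

Because the step size toward the target is controlled by beta, the network changes only slightly per minibatch, which is what lets the procedure accumulate correlations across the whole dataset without ever forming the full N-by-N kernel matrix.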

Cite

Text

Shi et al. "Scalable Training of Inference Networks for Gaussian-Process Models." International Conference on Machine Learning, 2019.

Markdown

[Shi et al. "Scalable Training of Inference Networks for Gaussian-Process Models." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/shi2019icml-scalable/)

BibTeX

@inproceedings{shi2019icml-scalable,
  title     = {{Scalable Training of Inference Networks for Gaussian-Process Models}},
  author    = {Shi, Jiaxin and Khan, Mohammad Emtiyaz and Zhu, Jun},
  booktitle = {International Conference on Machine Learning},
  year      = {2019},
  pages     = {5758--5768},
  volume    = {97},
  url       = {https://mlanthology.org/icml/2019/shi2019icml-scalable/}
}