Revisiting Active Sets for Gaussian Process Decoders

Abstract

Decoders built on Gaussian processes (GPs) are enticing because they marginalise over the non-linear function space. Such models (also known as GP-LVMs) are often expensive and notoriously difficult to train in practice, but can be scaled using variational inference and inducing points. In this paper, we revisit active set approximations. We develop a new stochastic estimate of the log-marginal likelihood based on recently discovered links to cross-validation, and we propose a computationally efficient approximation thereof. We demonstrate that the resulting stochastic active sets (SAS) approximation significantly improves the robustness of GP decoder training while reducing computational cost. The SAS-GP obtains more structure in the latent space, scales to many datapoints, and learns better representations than variational autoencoders, which is rarely the case for GP decoders.
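To make the "active set" idea concrete, the sketch below shows the classic subset-of-data approximation that the paper revisits: the GP log-marginal likelihood is evaluated only on a (here randomly drawn) active subset of the data, reducing the cost from O(n³) to O(m³) for an active set of size m. This is a minimal illustrative sketch with hypothetical names and a standard RBF kernel, not the paper's SAS estimator or its cross-validation-based objective.

```python
import numpy as np

def rbf_kernel(X, Z, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) kernel matrix between inputs X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def active_set_log_marginal(X, y, active_idx, noise=0.1):
    """Gaussian log-marginal likelihood on an active subset only:

        log N(y_A | 0, K_AA + noise * I)

    an O(m^3) surrogate for the full O(n^3) GP marginal likelihood,
    where m = len(active_idx).
    """
    XA, yA = X[active_idx], y[active_idx]
    K = rbf_kernel(XA, XA) + noise * np.eye(len(active_idx))
    L = np.linalg.cholesky(K)                       # K = L @ L.T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, yA))  # K^{-1} y_A
    return (-0.5 * yA @ alpha                       # data-fit term
            - np.log(np.diag(L)).sum()              # -0.5 * log|K|
            - 0.5 * len(yA) * np.log(2 * np.pi))    # normalisation

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                # toy latent codes
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
active = rng.choice(200, size=20, replace=False)  # stochastic active set
print(active_set_log_marginal(X, y, active))
```

In a GP decoder, an objective of this form would be maximised with respect to the latent codes and kernel hyperparameters, redrawing the active set stochastically at each step.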

Cite

Text

Moreno-Muñoz et al. "Revisiting Active Sets for Gaussian Process Decoders." Neural Information Processing Systems, 2022.

Markdown

[Moreno-Muñoz et al. "Revisiting Active Sets for Gaussian Process Decoders." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/morenomunoz2022neurips-revisiting/)

BibTeX

@inproceedings{morenomunoz2022neurips-revisiting,
  title     = {{Revisiting Active Sets for Gaussian Process Decoders}},
  author    = {Moreno-Muñoz, Pablo and Feldager, Cilie and Hauberg, Søren},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/morenomunoz2022neurips-revisiting/}
}