Inverse-Free Sparse Variational Gaussian Processes
Abstract
Gaussian processes (GPs) are a powerful prior over functions, but performing inference with them requires inverting or decomposing the kernel matrix, making them poorly suited to modern hardware. To address this, variational bounds have been proposed that require only matrix multiplications, at the cost of an additional variational parameter $\mathbf{T} \in \mathbb{R}^{M\times M}$. In practice, however, optimising $\mathbf{T}$ with typical deep learning optimisers is challenging, limiting the practical utility of these bounds. In this work, we solve this by introducing a preconditioner for a variational parameter in the bound, a tailored update for $\mathbf{T}$ based on natural gradients, and a stopping criterion to determine the number of updates. This yields an inverse-free method that is on par with existing approaches on a per-iteration basis, with low-precision computation and wall-clock speedups as the next step.
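The abstract refers to bounds that avoid inverting or decomposing the kernel matrix by introducing an extra $M \times M$ variational parameter $\mathbf{T}$. The sketch below is not the bound from this paper; it only illustrates the generic inverse-free ingredient such constructions typically rely on: for any positive-definite $\mathbf{T}$, concavity of the log-determinant gives $\log|\mathbf{K}| \le \operatorname{tr}(\mathbf{T}\mathbf{K}) - \log|\mathbf{T}| - M$, which is tight at $\mathbf{T} = \mathbf{K}^{-1}$ and can be evaluated with matrix multiplications alone. Function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def logdet_upper_bound(K: np.ndarray, L: np.ndarray) -> float:
    """Inverse-free upper bound on log|K| for positive-definite K.

    With T = L @ L.T (L lower triangular with positive diagonal),
    concavity of log-det gives  log|K| <= tr(T K) - log|T| - M,
    with equality at T = K^{-1}. Evaluating the bound needs only
    matrix multiplications; K itself is never inverted or decomposed.
    """
    M = K.shape[0]
    T = L @ L.T
    logdet_T = 2.0 * np.sum(np.log(np.diag(L)))  # cheap: L is triangular
    return float(np.trace(T @ K) - logdet_T - M)

# Toy check (illustrative sizes): the bound is tight when T approximates
# K^{-1} and loose otherwise. The explicit inverse below is used only to
# verify tightness, not by the bound itself.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
K = A @ A.T + 5 * np.eye(5)                    # a positive-definite "kernel" matrix
exact = np.linalg.slogdet(K)[1]
L_good = np.linalg.cholesky(np.linalg.inv(K))  # T = K^{-1}
L_rough = np.eye(5)                            # T = I, a poor guess
print(exact, logdet_upper_bound(K, L_good), logdet_upper_bound(K, L_rough))
```

In inverse-free variational schemes of this flavour, $\mathbf{T}$ is typically parameterised through a triangular factor (as above) so that its log-determinant stays cheap, and it is optimised jointly with the other variational parameters.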
Cite
Text
Cortinovis et al. "Inverse-Free Sparse Variational Gaussian Processes." NeurIPS 2024 Workshops: BDU, 2024.

Markdown
[Cortinovis et al. "Inverse-Free Sparse Variational Gaussian Processes." NeurIPS 2024 Workshops: BDU, 2024.](https://mlanthology.org/neuripsw/2024/cortinovis2024neuripsw-inversefree/)

BibTeX
@inproceedings{cortinovis2024neuripsw-inversefree,
title = {{Inverse-Free Sparse Variational Gaussian Processes}},
author = {Cortinovis, Stefano and Aitchison, Laurence and Hensman, James and Eleftheriadis, Stefanos and van der Wilk, Mark},
booktitle = {NeurIPS 2024 Workshops: BDU},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/cortinovis2024neuripsw-inversefree/}
}