Tighter Sparse Variational Gaussian Processes
Abstract
Sparse variational Gaussian process (GP) approximations based on inducing points have become the de facto standard for scaling GPs to large datasets, owing to their theoretical elegance, computational efficiency, and ease of implementation. This paper introduces a provably tighter variational approximation by relaxing the standard assumption that the conditional approximate posterior given the inducing points must match that in the prior. The key innovation is to modify the conditional posterior so that it has smaller variances than those of the prior at the training points. We derive the collapsed bound for the regression case, describe how to use the proposed approximation in large-data settings, and discuss its application to orthogonally structured inducing points and GP latent variable models. Extensive experiments on regression benchmarks, classification, and latent variable models demonstrate that the proposed approximation consistently matches or outperforms standard sparse variational GPs while maintaining the same computational cost.
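As background for the abstract's mention of the collapsed bound for the regression case, the sketch below computes the standard (Titsias-style) collapsed evidence lower bound that the paper tightens. It is not the authors' code; the function names (`rbf_kernel`, `collapsed_elbo`), the squared-exponential kernel, and the toy data are illustrative assumptions. The paper's approximation additionally allows the conditional posterior q(f | u) to have smaller variances than the prior conditional at the training inputs, which the standard bound below keeps fixed.

```python
# Minimal sketch (assumed, not the authors' code) of the standard collapsed
# SVGP bound for regression:
#   F = log N(y | 0, Q_nn + noise_var * I) - tr(K_nn - Q_nn) / (2 * noise_var),
# with Q_nn = K_nm K_mm^{-1} K_mn and Z the inducing inputs.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between row-wise inputs A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def collapsed_elbo(X, y, Z, noise_var=0.1, lengthscale=1.0, variance=1.0):
    """Standard collapsed bound; forms the n x n covariance directly for
    clarity (practical implementations use the Woodbury identity for
    O(n m^2) cost)."""
    n = X.shape[0]
    Knn_diag = variance * np.ones(n)
    Kmm = rbf_kernel(Z, Z, lengthscale, variance) + 1e-6 * np.eye(Z.shape[0])
    Kmn = rbf_kernel(Z, X, lengthscale, variance)
    L = np.linalg.cholesky(Kmm)
    A = np.linalg.solve(L, Kmn)          # A^T A = Q_nn
    Qnn = A.T @ A
    cov = Qnn + noise_var * np.eye(n)
    chol = np.linalg.cholesky(cov)
    alpha = np.linalg.solve(chol, y)
    log_marg = -0.5 * (n * np.log(2.0 * np.pi)
                       + 2.0 * np.sum(np.log(np.diag(chol)))
                       + alpha @ alpha)
    trace_term = np.sum(Knn_diag - np.diag(Qnn)) / (2.0 * noise_var)
    return log_marg - trace_term

# Toy usage: 200 noisy observations of a sine function, 10 inducing points.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
Z = np.linspace(-3.0, 3.0, 10)[:, None]
print(collapsed_elbo(X, y, Z))
```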
Cite
Text
Bui et al. "Tighter Sparse Variational Gaussian Processes." Transactions on Machine Learning Research, 2025.
Markdown
[Bui et al. "Tighter Sparse Variational Gaussian Processes." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/bui2025tmlr-tighter/)
BibTeX
@article{bui2025tmlr-tighter,
title = {{Tighter Sparse Variational Gaussian Processes}},
author = {Bui, Thang D and Ashman, Matthew and Turner, Richard E.},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/bui2025tmlr-tighter/}
}