Scalable Exact Inference in Multi-Output Gaussian Processes
Abstract
Multi-output Gaussian processes (MOGPs) leverage the flexibility and interpretability of GPs while capturing structure across outputs, which is desirable, for example, in spatio-temporal modelling. The key problem with MOGPs is their computational scaling $O(n^3 p^3)$, which is cubic in both the number of inputs $n$ (e.g., time points or locations) and the number of outputs $p$. For this reason, a popular class of MOGPs assumes that the data live around a low-dimensional linear subspace, reducing the complexity to $O(n^3 m^3)$. However, this cost is still cubic in the dimensionality of the subspace $m$, which remains prohibitively expensive for many applications. We propose the use of a sufficient statistic of the data to accelerate inference and learning in MOGPs with orthogonal bases. The method achieves linear scaling in $m$ in practice, allowing these models to scale to large $m$ without sacrificing significant expressivity or requiring approximation. This advance opens up a wide range of real-world tasks and can be combined with existing GP approximations in a plug-and-play way. We demonstrate the efficacy of the method on various synthetic and real-world data sets.
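As a rough illustration of the idea described in the abstract, the sketch below (not the authors' code) projects $p$ observed outputs onto an $m$-dimensional orthogonal basis and then treats each projected channel as an independent single-output GP regression, so the cost grows linearly in $m$ rather than cubically. The variable names ($n$, $p$, $m$, $H$, $U$, $S$) follow the abstract's setup; the RBF kernel, noise level, and basis construction are illustrative assumptions.

```python
# Minimal sketch of decoupled inference via an orthogonal basis (illustrative,
# not the authors' implementation). Assumes a mixing matrix H = U S^(1/2) with
# U having orthonormal columns, so the projection T y = S^(-1/2) U^T y of the
# data is a sufficient statistic and the m channels decouple.

import numpy as np

n, p, m = 100, 50, 3           # number of inputs, outputs, latent processes
x = np.linspace(0, 10, n)

# Orthogonal basis: U (p x m) with orthonormal columns, S diagonal and positive.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((p, m)))
S = np.diag(np.array([2.0, 1.0, 0.5]))
H = U @ np.sqrt(S)             # mixing matrix (p x m)

def rbf(x1, x2, ls=1.0):
    """Illustrative RBF kernel."""
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ls**2)

# Synthetic observations: latent GP draws mixed through H, plus noise.
K = rbf(x, x) + 1e-8 * np.eye(n)
F = np.linalg.cholesky(K) @ rng.standard_normal((n, m))   # latent draws (n x m)
Y = F @ H.T + 0.1 * rng.standard_normal((n, p))           # observed data (n x p)

# Sufficient statistic: project the p outputs down to m channels.
T = np.linalg.inv(np.sqrt(S)) @ U.T    # projection (m x p)
Y_proj = Y @ T.T                       # projected data (n x m)

# Each projected channel is now an independent single-output GP regression,
# so total cost is O(m n^3) instead of O(m^3 n^3).
noise = 0.1**2
for j in range(m):
    # Projected noise variance scales as noise / S_jj under this basis.
    Ky = K + (noise / S[j, j]) * np.eye(n)
    alpha = np.linalg.solve(Ky, Y_proj[:, j])
    post_mean = K @ alpha              # posterior mean for latent channel j
```

Note that the loop over channels runs $m$ independent $O(n^3)$ solves, which is the source of the linear scaling in $m$; the single shared Cholesky/solve structure can also be combined with standard single-output GP approximations in a plug-and-play way, as the abstract states.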
Cite
Text
Bruinsma et al. "Scalable Exact Inference in Multi-Output Gaussian Processes." International Conference on Machine Learning, 2020.

Markdown
[Bruinsma et al. "Scalable Exact Inference in Multi-Output Gaussian Processes." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/bruinsma2020icml-scalable/)

BibTeX
@inproceedings{bruinsma2020icml-scalable,
  title     = {{Scalable Exact Inference in Multi-Output Gaussian Processes}},
  author    = {Bruinsma, Wessel and Perim, Eric and Tebbutt, William and Hosking, Scott and Solin, Arno and Turner, Richard},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {1190--1201},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/bruinsma2020icml-scalable/}
}