Sparse Algorithms for Markovian Gaussian Processes

Abstract

Approximate Bayesian inference methods that scale to very large datasets are crucial in leveraging probabilistic models for real-world time series. Sparse Markovian Gaussian processes combine the use of inducing variables with efficient Kalman filter-like recursions, resulting in algorithms whose computational and memory requirements scale linearly in the number of inducing points, whilst also enabling parallel parameter updates and stochastic optimisation. Under this paradigm, we derive a general site-based approach to approximate inference, whereby we approximate the non-Gaussian likelihood with local Gaussian terms, called sites. Our approach results in a suite of novel sparse extensions to algorithms from both the machine learning and signal processing literature, including variational inference, expectation propagation, and the classical nonlinear Kalman smoothers. The derived methods are suited to large time series, and we also demonstrate their applicability to spatio-temporal data, where the model has separate inducing points in both time and space.
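
To make the idea concrete, below is a minimal illustrative sketch (not the authors' implementation, whose details are in the paper and its accompanying code) of the kind of site-based Kalman recursion the abstract describes: each likelihood term is replaced by a local Gaussian site N(y_t; H x_t, site_var_t), so filtering stays conjugate and costs O(T). The prior here is an Ornstein-Uhlenbeck (Matérn-1/2) state-space model, and all function and variable names are hypothetical; in the paper the site parameters would be refined by variational inference, expectation propagation, or a nonlinear smoother rather than set directly from data.

    # Minimal sketch: Kalman filtering for a Markovian GP prior with Gaussian
    # "site" approximations to the likelihood (illustrative only).
    import numpy as np

    def kalman_filter_with_sites(t, site_mean, site_var, lengthscale=1.0, variance=1.0):
        """Filter a scalar OU (Matern-1/2) state-space GP against Gaussian sites.

        Each step touches only the scalar latent state, so the cost is linear
        in the number of time points.
        """
        H = 1.0                      # observation model: site is N(y_t; H x_t, site_var_t)
        m, P = 0.0, variance         # stationary prior mean and variance
        filter_means, filter_vars = [], []
        for k in range(len(t)):
            if k > 0:
                dt = t[k] - t[k - 1]
                A = np.exp(-dt / lengthscale)       # OU transition over the gap dt
                Q = variance * (1.0 - A ** 2)       # matching process noise
                m, P = A * m, A * P * A + Q         # predict step
            # Conjugate update against the local Gaussian site
            S = H * P * H + site_var[k]
            K = P * H / S
            m = m + K * (site_mean[k] - H * m)
            P = (1.0 - K * H) * P
            filter_means.append(m)
            filter_vars.append(P)
        return np.array(filter_means), np.array(filter_vars)

    # Toy usage: sites initialised directly from noisy observations.
    t = np.linspace(0.0, 10.0, 50)
    y = np.sin(t) + 0.1 * np.random.randn(50)
    fm, fv = kalman_filter_with_sites(t, site_mean=y, site_var=0.1 * np.ones(50))

In the sparse setting of the paper, the same style of recursion runs over the inducing points rather than every observation, which is what gives the linear scaling in the number of inducing points.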

Cite

Text

Wilkinson et al. "Sparse Algorithms for Markovian Gaussian Processes." Artificial Intelligence and Statistics, 2021.

Markdown

[Wilkinson et al. "Sparse Algorithms for Markovian Gaussian Processes." Artificial Intelligence and Statistics, 2021.](https://mlanthology.org/aistats/2021/wilkinson2021aistats-sparse/)

BibTeX

@inproceedings{wilkinson2021aistats-sparse,
  title     = {{Sparse Algorithms for Markovian Gaussian Processes}},
  author    = {Wilkinson, William and Solin, Arno and Adam, Vincent},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2021},
  pages     = {1747--1755},
  volume    = {130},
  url       = {https://mlanthology.org/aistats/2021/wilkinson2021aistats-sparse/}
}