Approximate Uncertainty Propagation for Continuous Gaussian Process Dynamical Systems

Abstract

When learning continuous dynamical systems with Gaussian Processes, computing trajectories requires repeatedly mapping the distributions of uncertain states through the distribution of learned nonlinear functions, which is generally intractable. Since sampling-based approaches are computationally expensive, we consider approximations of the output and trajectory distributions. We show that existing methods make an incorrect implicit independence assumption and underestimate the model-induced uncertainty. We propose a piecewise-linear approximation of the GP model, yielding a class of numerical solvers whose efficient uncertainty estimates match those of sampling-based methods.
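Example

To illustrate the general idea of linearization-based uncertainty propagation for a GP-learned dynamical system, the following minimal Python sketch (not the authors' implementation, and not their piecewise-linear solver class) propagates a Gaussian state estimate through a GP posterior by locally linearizing the posterior mean and inflating the state covariance with the GP predictive variance at each step. The RBF kernel, toy 1-D training data, step size, and explicit Euler integrator are all illustrative assumptions.

import numpy as np

def rbf(X1, X2, ls=1.0, var=1.0):
    # Squared-exponential kernel between two sets of 1-D inputs.
    d = X1[:, None, :] - X2[None, :, :]
    return var * np.exp(-0.5 * np.sum(d**2, axis=-1) / ls**2)

# Toy training data for the vector field f(x) = -x (illustrative only).
X_train = np.linspace(-2.0, 2.0, 10).reshape(-1, 1)
y_train = -X_train.ravel()
noise = 1e-4
K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
alpha = np.linalg.solve(K, y_train)

def gp_predict(x):
    # Posterior mean and variance of the learned vector field at x (shape (1,)).
    k_star = rbf(x.reshape(1, -1), X_train).ravel()
    mean = k_star @ alpha
    var = rbf(x.reshape(1, -1), x.reshape(1, -1))[0, 0] - k_star @ np.linalg.solve(K, k_star)
    return mean, max(var, 0.0)

def gp_mean_jacobian(x, eps=1e-5):
    # Finite-difference derivative of the posterior mean: the local linearization.
    m_plus, _ = gp_predict(x + eps)
    m_minus, _ = gp_predict(x - eps)
    return (m_plus - m_minus) / (2 * eps)

def euler_step(mu, P, dt):
    # One explicit Euler step of the state mean mu and covariance P (scalars here).
    f_mu, f_var = gp_predict(np.atleast_1d(mu))
    A = gp_mean_jacobian(np.atleast_1d(mu))   # df/dx at the current mean
    mu_next = mu + dt * f_mu                  # propagate the mean
    Phi = 1.0 + dt * A                        # Jacobian of the discretized step
    # Linearized covariance update plus model-induced uncertainty from the GP variance.
    P_next = Phi * P * Phi + dt**2 * f_var
    return mu_next, P_next

mu, P = 1.5, 0.0
for _ in range(50):
    mu, P = euler_step(mu, P, dt=0.05)
print(f"mean = {mu:.3f}, std = {np.sqrt(P):.3f}")

Because the covariance is propagated jointly with the mean rather than resampled independently at every step, a scheme of this kind avoids the implicit independence assumption discussed in the abstract; the paper develops this direction with a piecewise-linear GP approximation and dedicated numerical solvers.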

Cite

Text

Ridderbusch et al. "Approximate Uncertainty Propagation for Continuous Gaussian Process Dynamical Systems." NeurIPS 2022 Workshops: CDS, 2022.

Markdown

[Ridderbusch et al. "Approximate Uncertainty Propagation for Continuous Gaussian Process Dynamical Systems." NeurIPS 2022 Workshops: CDS, 2022.](https://mlanthology.org/neuripsw/2022/ridderbusch2022neuripsw-approximate/)

BibTeX

@inproceedings{ridderbusch2022neuripsw-approximate,
  title     = {{Approximate Uncertainty Propagation for Continuous Gaussian Process Dynamical Systems}},
  author    = {Ridderbusch, Steffen and Ober-Blöbaum, Sina and Goulart, Paul James},
  booktitle = {NeurIPS 2022 Workshops: CDS},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/ridderbusch2022neuripsw-approximate/}
}