Deep Kernel Learning of Nonlinear Latent Force Models
Abstract
Scientific processes are often modelled by sets of differential equations. As datasets grow, individually fitting these models and quantifying their uncertainties becomes a computationally challenging task. Latent force models offer a mathematically grounded balance between data-driven and mechanistic inference in such dynamical systems, whilst accounting for stochasticity in observations and parameters. However, the required derivation and computation of the posterior kernel terms over a low-dimensional latent force is rarely tractable, requiring approximations for complex scenarios such as nonlinear dynamics. In this paper, we overcome this issue by posing the problem as learning the solution operator itself for a class of latent force models, thereby improving the scalability of these models. This is achieved by employing a deep kernel along with a meta-learned embedding of the output functions. Finally, we demonstrate the ability to extrapolate a solution operator trained on simulations to real experimental datasets, as well as scaling to large datasets.
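The core ingredient named in the abstract, a deep kernel, composes a neural feature extractor with a standard base kernel, so that k(x, x') = k_base(g(x), g(x')). The following is a minimal illustrative sketch of that composition in NumPy; the MLP architecture, sizes, and fixed random weights here are assumptions for illustration, not the authors' model or the latent force component.

```python
import numpy as np

def mlp_features(x, W1, b1, W2, b2):
    """Feature map g(x): a small fixed-weight MLP mapping inputs to a latent space.
    In deep kernel learning these weights would be trained jointly with the GP."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def deep_rbf_kernel(X1, X2, params, lengthscale=1.0):
    """Deep kernel: an RBF base kernel evaluated on learned features,
    k(x, x') = exp(-||g(x) - g(x')||^2 / (2 * lengthscale^2))."""
    Z1 = mlp_features(X1, *params)
    Z2 = mlp_features(X2, *params)
    # Pairwise squared distances between feature vectors (clipped for numerical safety).
    sq = np.sum(Z1**2, axis=1)[:, None] + np.sum(Z2**2, axis=1)[None, :] - 2.0 * Z1 @ Z2.T
    return np.exp(-np.maximum(sq, 0.0) / (2.0 * lengthscale**2))

# Hypothetical random weights standing in for a trained feature extractor.
rng = np.random.default_rng(0)
params = (rng.normal(size=(1, 16)), rng.normal(size=16),
          rng.normal(size=(16, 4)), rng.normal(size=4))
X = rng.uniform(-1.0, 1.0, size=(5, 1))
K = deep_rbf_kernel(X, X, params)  # symmetric 5x5 Gram matrix with unit diagonal
```

In practice the feature-extractor weights and kernel hyperparameters are learned by maximising the GP marginal likelihood; this sketch only shows the kernel composition itself.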
Cite
Text
Moss et al. "Deep Kernel Learning of Nonlinear Latent Force Models." Transactions on Machine Learning Research, 2024.
Markdown
[Moss et al. "Deep Kernel Learning of Nonlinear Latent Force Models." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/moss2024tmlr-deep/)
BibTeX
@article{moss2024tmlr-deep,
title = {{Deep Kernel Learning of Nonlinear Latent Force Models}},
author = {Moss, Jacob and England, Jeremy and Li{\`o}, Pietro},
journal = {Transactions on Machine Learning Research},
year = {2024},
url = {https://mlanthology.org/tmlr/2024/moss2024tmlr-deep/}
}