Invariance-Based Learning of Latent Dynamics

Abstract

We propose a new model class for predicting dynamical trajectories from high-dimensional empirical data. The approach combines variational autoencoders and (spatio-)temporal transformers within a framework designed to enforce certain scientifically motivated invariances. The models allow inference of system behavior at any continuous time and generalize well beyond the data distributions seen during training. Furthermore, the models do not require an explicit neural ODE formulation, making them efficient and highly scalable in practice. We study model behavior through simple theoretical analyses and extensive empirical experiments. The latter investigate the ability to predict the trajectories of complicated systems from finite data and show that the proposed approaches can outperform existing neural-dynamical models. We also study more general inductive biases in the context of transfer to data obtained under entirely novel system interventions. Overall, our results provide a new framework for efficiently learning complicated dynamics in a data-driven manner, with potential applications in a wide range of fields including physics, biology, and engineering.
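The paper itself specifies the exact architecture and the invariance-enforcing objectives. As a rough illustration of the core idea described in the abstract, a VAE encoder paired with a temporal transformer that is queried at a continuous time directly rather than integrating a neural ODE, here is a minimal PyTorch sketch. All module names, dimensions, and design choices below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """VAE encoder: maps a high-dimensional observation to the parameters
    of a Gaussian posterior over a low-dimensional latent state."""

    def __init__(self, obs_dim, latent_dim, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim), nn.GELU(),
            nn.Linear(hidden_dim, 2 * latent_dim),  # mean and log-variance
        )

    def forward(self, x):
        mu, logvar = self.net(x).chunk(2, dim=-1)
        return mu, logvar


def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the standard VAE reparameterization)."""
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)


class LatentTransformer(nn.Module):
    """Temporal transformer over latent states: attends over a window of
    past latents plus an embedding of a continuous query time, and predicts
    the latent state at that time (no explicit ODE integration)."""

    def __init__(self, latent_dim, nhead=4, num_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=latent_dim, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.time_embed = nn.Linear(1, latent_dim)  # continuous-time embedding
        self.readout = nn.Linear(latent_dim, latent_dim)

    def forward(self, z_hist, t_query):
        # z_hist: (batch, T, latent_dim); t_query: (batch, 1)
        query = self.time_embed(t_query).unsqueeze(1)  # (batch, 1, latent_dim)
        h = self.encoder(torch.cat([z_hist, query], dim=1))
        return self.readout(h[:, -1])  # predicted latent at the query time


# Example forward pass with made-up sizes.
enc = Encoder(obs_dim=784, latent_dim=16)
dyn = LatentTransformer(latent_dim=16)
x = torch.randn(8, 10, 784)          # batch of 10-step observed trajectories
mu, logvar = enc(x)
z = reparameterize(mu, logvar)       # latent trajectory, shape (8, 10, 16)
t = torch.full((8, 1), 12.5)         # continuous query time beyond the window
z_next = dyn(z, t)                   # predicted latent state at time t
```

A decoder mapping latents back to observation space, the ELBO terms, and the invariance losses that the abstract emphasizes are omitted here; the sketch only shows why querying the transformer with a continuous time embedding sidesteps the step-by-step integration a neural ODE would require.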

Cite

Text

Lagemann et al. "Invariance-Based Learning of Latent Dynamics." International Conference on Learning Representations, 2024.

Markdown

[Lagemann et al. "Invariance-Based Learning of Latent Dynamics." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/lagemann2024iclr-invariancebased/)

BibTeX

@inproceedings{lagemann2024iclr-invariancebased,
  title     = {{Invariance-Based Learning of Latent Dynamics}},
  author    = {Lagemann, Kai and Lagemann, Christian and Mukherjee, Sach},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/lagemann2024iclr-invariancebased/}
}