A Riemannian Framework for Learning Reduced-Order Lagrangian Dynamics
Abstract
By incorporating physical consistency as an inductive bias, deep neural networks display increased generalization capabilities and data efficiency when learning nonlinear dynamic models. However, the complexity of these models generally increases with the system dimensionality, requiring larger datasets, more complex deep networks, and significant computational effort. We propose a novel geometric network architecture to learn physically consistent reduced-order dynamic parameters that accurately describe the original high-dimensional system behavior. This is achieved by building on recent advances in model-order reduction and by adopting a Riemannian perspective to jointly learn a nonlinear structure-preserving latent space and the associated low-dimensional dynamics. Our approach enables accurate long-term predictions of the high-dimensional dynamics of rigid and deformable systems with increased data efficiency by inferring interpretable and physically plausible reduced Lagrangian models.
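The sketch below illustrates the general idea described in the abstract, not the paper's actual architecture: an encoder/decoder pair maps the high-dimensional configuration to a low-dimensional latent coordinate, while latent networks parametrize a positive-definite mass matrix and a potential, so that latent accelerations follow from the Euler-Lagrange equations of the reduced Lagrangian L(z, ż) = ½ żᵀM(z)ż − V(z). All module names, dimensions, and network sizes are hypothetical choices made for illustration.

```python
# Minimal sketch (assumed, not the authors' implementation) of learning
# reduced-order Lagrangian dynamics in a latent space.
import torch
import torch.nn as nn


class ReducedLagrangianModel(nn.Module):
    def __init__(self, dim_q: int = 30, dim_z: int = 3, hidden: int = 64):
        super().__init__()
        self.dim_z = dim_z
        # Nonlinear latent embedding and its (approximate) inverse.
        self.encoder = nn.Sequential(nn.Linear(dim_q, hidden), nn.Tanh(), nn.Linear(hidden, dim_z))
        self.decoder = nn.Sequential(nn.Linear(dim_z, hidden), nn.Tanh(), nn.Linear(hidden, dim_q))
        # Cholesky factor of the latent mass matrix and the latent potential.
        self.chol_net = nn.Sequential(nn.Linear(dim_z, hidden), nn.Tanh(), nn.Linear(hidden, dim_z * dim_z))
        self.potential = nn.Sequential(nn.Linear(dim_z, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def mass_matrix(self, z: torch.Tensor) -> torch.Tensor:
        # M(z) = L L^T + eps * I is symmetric positive definite by construction.
        L = self.chol_net(z).reshape(-1, self.dim_z, self.dim_z).tril()
        return L @ L.transpose(-1, -2) + 1e-4 * torch.eye(self.dim_z)

    def latent_acceleration(self, z: torch.Tensor, z_dot: torch.Tensor) -> torch.Tensor:
        # Solve the Euler-Lagrange equations for the latent acceleration:
        #   M(z) z_ddot = -dV/dz - (dM/dt) z_dot + 1/2 d(z_dot^T M(z) z_dot)/dz
        # z must require gradients (e.g. the output of self.encoder).
        def kinetic(z_, zd_):
            M_ = self.mass_matrix(z_)
            return 0.5 * torch.einsum('bi,bij,bj->b', zd_, M_, zd_).sum()

        dV = torch.autograd.grad(self.potential(z).sum(), z, create_graph=True)[0]
        dT_dz = torch.autograd.grad(kinetic(z, z_dot), z, create_graph=True)[0]
        # Time derivative of M along the trajectory via a Jacobian-vector product.
        dM_dt = torch.autograd.functional.jvp(
            lambda z_: self.mass_matrix(z_), (z,), (z_dot,), create_graph=True)[1]
        M = self.mass_matrix(z)
        rhs = -dV - torch.einsum('bij,bj->bi', dM_dt, z_dot) + dT_dz
        return torch.linalg.solve(M, rhs)


# Hypothetical usage: embed a high-dimensional state, roll the latent dynamics
# forward with a semi-implicit Euler step, and decode the prediction.
model = ReducedLagrangianModel()
q = torch.randn(8, 30)
z, z_dot, dt = model.encoder(q), torch.zeros(8, 3), 0.01
z_ddot = model.latent_acceleration(z, z_dot)
z_dot = z_dot + dt * z_ddot
q_pred = model.decoder(z + dt * z_dot)
```

In this kind of setup, the encoder/decoder and the latent Lagrangian would typically be trained jointly, e.g. with a reconstruction loss on q and a prediction loss on integrated trajectories, which is one way to keep the reduced model both low-dimensional and physically plausible.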
Cite
Text
Friedl et al. "A Riemannian Framework for Learning Reduced-Order Lagrangian Dynamics." International Conference on Learning Representations, 2025.
Markdown
[Friedl et al. "A Riemannian Framework for Learning Reduced-Order Lagrangian Dynamics." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/friedl2025iclr-riemannian/)
BibTeX
@inproceedings{friedl2025iclr-riemannian,
title = {{A Riemannian Framework for Learning Reduced-Order Lagrangian Dynamics}},
author = {Friedl, Katharina and Jaquier, Noémie and Lundell, Jens and Asfour, Tamim and Kragic, Danica},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/friedl2025iclr-riemannian/}
}