Neural Modes: Self-Supervised Learning of Nonlinear Modal Subspaces
Abstract
We propose a self-supervised approach for learning physics-based subspaces for real-time simulation. Existing learning-based methods construct subspaces by approximating pre-defined simulation data in a purely geometric way. However, this approach tends to produce high-energy configurations, leads to entangled latent-space dimensions, and generalizes poorly beyond the training set. To overcome these limitations, we propose a self-supervised approach that directly minimizes the system's mechanical energy during training. We show that our method leads to learned subspaces that reflect physical equilibrium constraints, resolve overfitting issues of previous methods, and offer interpretable latent-space parameters.
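The core idea, training a subspace by minimizing mechanical energy directly rather than regressing onto simulation snapshots, can be illustrated with a deliberately tiny toy problem. The sketch below is our own illustration, not the authors' code: the "network" is reduced to a single linear mode shape `w` for a 1-D spring chain anchored to a wall, so displacements are `u(z) = z * w`. Pinning the handle degree of freedom `w[0] = 1` gives the latent coordinate `z` a physical meaning (the displacement of the first mass) and rules out the trivial zero-energy solution `u = 0`; the chain length, stiffness, and learning rate are arbitrary choices for the example.

```python
import numpy as np

# Toy self-supervised subspace learning: minimize elastic energy directly,
# with no precomputed simulation data.
n, k, lr = 6, 1.0, 0.1
rng = np.random.default_rng(0)
w = rng.normal(size=n)  # learnable mode shape
w[0] = 1.0              # pin the handle degree of freedom

def energy(u):
    # Elastic energy: one spring from the wall to mass 0, then between neighbours.
    d = np.diff(np.concatenate(([0.0], u)))
    return 0.5 * k * np.sum(d**2)

for _ in range(5000):
    z = rng.choice([-1.0, 1.0])  # unit-amplitude latent samples keep the toy stable
    u = z * w
    d = np.diff(np.concatenate(([0.0], u)))
    grad_u = k * (d - np.append(d[1:], 0.0))  # dE/du for the chain of springs
    grad_w = z * grad_u                       # chain rule through u = z * w
    grad_w[0] = 0.0                           # handle stays pinned
    w -= lr * grad_w
```

At convergence, `w` approaches the all-ones vector: the minimum-energy shape compatible with the pinned handle is a rigid translation of the chain, so at `z = 1` only the wall spring stores energy and `energy(w)` is about `0.5 * k`. The real method replaces the linear map with a neural network and a full nonlinear elastic energy, but the training signal is the same.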
Cite
Text
Wang et al. "Neural Modes: Self-Supervised Learning of Nonlinear Modal Subspaces." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.02185
Markdown
[Wang et al. "Neural Modes: Self-Supervised Learning of Nonlinear Modal Subspaces." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/wang2024cvpr-neural/) doi:10.1109/CVPR52733.2024.02185
BibTeX
@inproceedings{wang2024cvpr-neural,
title = {{Neural Modes: Self-Supervised Learning of Nonlinear Modal Subspaces}},
author = {Wang, Jiahong and Du, Yinwei and Coros, Stelian and Thomaszewski, Bernhard},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2024},
pages = {23158-23167},
doi = {10.1109/CVPR52733.2024.02185},
url = {https://mlanthology.org/cvpr/2024/wang2024cvpr-neural/}
}