NeMF: Neural Motion Fields for Kinematic Animation

Abstract

We present an implicit neural representation to learn the spatio-temporal space of kinematic motions. Unlike previous work that represents motion as discrete sequential samples, we propose to express the vast motion space as a continuous function over time, hence the name Neural Motion Fields (NeMF). Specifically, we use a neural network to learn this function for miscellaneous sets of motions, designed as a generative model conditioned on a temporal coordinate $t$ and a random vector $z$ for controlling the style. The model is then trained as a Variational Autoencoder (VAE) with motion encoders to sample the latent space. We train our model on a diverse human motion dataset and a quadruped dataset to demonstrate its versatility, and finally deploy it as a generic motion prior to solve task-agnostic problems and show its superiority in different motion generation and editing applications, such as motion interpolation, in-betweening, and re-navigating. More details can be found on our project page: https://cs.yale.edu/homes/che/projects/nemf/.
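To make the core idea concrete, the sketch below shows what a neural motion field might look like as a pose decoder conditioned on a temporal coordinate t and a latent style vector z. This is a minimal illustration under our own assumptions (module name, layer sizes, pose dimension, and the sinusoidal time encoding are ours for exposition), not the authors' released implementation.

# Illustrative sketch of a neural motion field: a continuous function
# f(t, z) -> pose. All names and sizes here are assumptions for exposition,
# not the authors' released implementation.
import torch
import torch.nn as nn


def positional_encoding(t, num_freqs=8):
    # Map normalized time t in [0, 1] to sinusoidal features, a common
    # choice for implicit neural representations (frequency count assumed).
    freqs = 2.0 ** torch.arange(num_freqs, dtype=t.dtype, device=t.device)
    angles = t * freqs * torch.pi  # (batch, num_freqs) via broadcasting
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)


class NeuralMotionField(nn.Module):
    # Because the field is continuous in t, poses can be queried at
    # arbitrary (even non-integer) time coordinates.
    def __init__(self, latent_dim=128, pose_dim=72, num_freqs=8, hidden=512):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 2 * num_freqs + latent_dim
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, pose_dim),  # e.g., per-joint rotation params
        )

    def forward(self, t, z):
        # t: (batch, 1) normalized time; z: (batch, latent_dim) style code.
        feats = torch.cat([positional_encoding(t, self.num_freqs), z], dim=-1)
        return self.mlp(feats)


# Query one latent motion at 240 continuously varying time samples.
field = NeuralMotionField()
z = torch.randn(1, 128)                      # one sampled motion "style"
t = torch.linspace(0, 1, 240).unsqueeze(-1)  # (240, 1) time coordinates
poses = field(t, z.expand(240, -1))          # (240, pose_dim)

In the paper, z is obtained through VAE motion encoders during training so that the latent space can be sampled; here z is simply drawn from a standard normal for illustration.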

Cite

Text

He et al. "NeMF: Neural Motion Fields for Kinematic Animation." Neural Information Processing Systems, 2022.

Markdown

[He et al. "NeMF: Neural Motion Fields for Kinematic Animation." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/he2022neurips-nemf/)

BibTeX

@inproceedings{he2022neurips-nemf,
  title     = {{NeMF: Neural Motion Fields for Kinematic Animation}},
  author    = {He, Chengan and Saito, Jun and Zachary, James and Rushmeier, Holly and Zhou, Yi},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/he2022neurips-nemf/}
}