Toward Continuous-Time Representations of Human Motion
Abstract
For human motion understanding and generation, it is common to represent a motion sequence via the hidden state of a recurrent neural network, learned end to end. While powerful, this representation is inflexible: recurrent models are trained at a specific frame rate, and the hidden state is hard to interpret. In this paper, we show that we can instead represent continuous motion via latent parametric curves, leveraging techniques from computer graphics and signal processing. Our parametric representation is expressive enough to faithfully capture continuous motion with few parameters, is easy to obtain, and is effective in downstream tasks. We validate the proposed method on the AMASS and Human3.6M datasets through reconstruction and on a downstream point-to-point prediction task, and show that our method is able to generate realistic motion. See our demo at www.github.com/WeiyuDu/motion-encode.
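The idea of a parametric-curve representation can be illustrated with a minimal sketch (an assumption for illustration, not the paper's actual model): fit Bézier control points to a sampled joint trajectory by least squares, after which the motion can be evaluated at any continuous time, independent of the capture frame rate. The function names below (`fit_bezier`, `eval_bezier`) are hypothetical.

```python
import numpy as np
from math import comb

def bernstein_basis(t, degree):
    """Bernstein polynomial basis evaluated at times t in [0, 1]."""
    t = np.asarray(t, dtype=float)[:, None]            # shape (T, 1)
    k = np.arange(degree + 1)[None, :]                 # shape (1, d+1)
    binom = np.array([comb(degree, i) for i in range(degree + 1)])
    return binom * t**k * (1.0 - t)**(degree - k)      # shape (T, d+1)

def fit_bezier(times, values, degree=9):
    """Least-squares fit of Bezier control points to sampled values."""
    B = bernstein_basis(times, degree)
    ctrl, *_ = np.linalg.lstsq(B, values, rcond=None)
    return ctrl

def eval_bezier(times, ctrl):
    """Evaluate the fitted curve at arbitrary (continuous) times."""
    return bernstein_basis(times, len(ctrl) - 1) @ ctrl

# Sample a joint angle at 30 fps for one second, then resample continuously.
times = np.linspace(0.0, 1.0, 30)
angle = np.sin(2.0 * np.pi * times)            # stand-in for a joint trajectory
ctrl = fit_bezier(times, angle, degree=9)      # 10 parameters for 30 frames
recon = eval_bezier(np.linspace(0.0, 1.0, 120), ctrl)  # "120 fps" resample
```

A full-body pose sequence would fit one curve per degree of freedom (`values` can be a 2-D array of shape `(frames, dims)`, which `np.linalg.lstsq` handles column-wise), and evaluation at a finer time grid gives frame-rate-free resampling.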
Cite
Text
Du et al. "Toward Continuous-Time Representations of Human Motion." European Conference on Computer Vision Workshops, 2020. doi:10.1007/978-3-030-65414-6_37
Markdown
[Du et al. "Toward Continuous-Time Representations of Human Motion." European Conference on Computer Vision Workshops, 2020.](https://mlanthology.org/eccvw/2020/du2020eccvw-continuoustime/) doi:10.1007/978-3-030-65414-6_37
BibTeX
@inproceedings{du2020eccvw-continuoustime,
title = {{Toward Continuous-Time Representations of Human Motion}},
author = {Du, Weiyu and Rybkin, Oleh and Zhang, Lingzhi and Shi, Jianbo},
booktitle = {European Conference on Computer Vision Workshops},
year = {2020},
pages = {543-548},
doi = {10.1007/978-3-030-65414-6_37},
url = {https://mlanthology.org/eccvw/2020/du2020eccvw-continuoustime/}
}