Robust Generation of Dynamical Patterns in Human Motion by a Deep Belief Nets
Abstract
We propose a Deep Belief Net model for robust motion generation, which consists of two layers of Restricted Boltzmann Machines (RBMs). The lower layer has multiple RBMs for encoding real-valued spatial patterns of motion frames into compact representations. The upper layer has one conditional RBM for learning temporal constraints on transitions between those compact representations. This separation of spatial and temporal learning makes it possible to reproduce many attractive dynamical behaviors such as walking by a stable limit cycle, a gait transition by bifurcation, synchronization of limbs by phase-locking, and easy top-down control. We trained the model with human motion capture data and the results of motion generation are reported here.
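The two-layer architecture described in the abstract can be sketched as follows. This is a minimal, untrained forward-pass sketch under my own assumptions, not the authors' implementation: all class names, parameter names, and sizes (`GaussianRBM`, `ConditionalRBM`, `n_joints`, `n_code`, `order`) are hypothetical, sampling is replaced by mean-field activations, and decoding back to joint angles is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GaussianRBM:
    """Lower layer: encodes a real-valued motion frame into a compact hidden code."""
    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_h = np.zeros(n_hidden)

    def encode(self, frame):
        # Mean-field hidden activation given a (standardized) visible frame.
        return sigmoid(frame @ self.W + self.b_h)

class ConditionalRBM:
    """Upper layer: the current code is conditioned on the k most recent codes."""
    def __init__(self, n_units, order):
        self.order = order
        # Autoregressive weights from each of the k past codes to the current one.
        self.A = rng.normal(0, 0.01, (order, n_units, n_units))
        self.b = np.zeros(n_units)

    def step(self, history):
        # history: list of past compact representations (most recent last).
        total = self.b.copy()
        for k, h in enumerate(reversed(history[-self.order:])):
            total += h @ self.A[k]
        return sigmoid(total)

# Generation sketch: encode a few seed frames, then roll the temporal model forward.
n_joints, n_code, order = 60, 30, 3
spatial = GaussianRBM(n_joints, n_code)
temporal = ConditionalRBM(n_code, order)

seed_frames = rng.normal(size=(order, n_joints))   # stand-in for mocap joint angles
history = [spatial.encode(f) for f in seed_frames]
for _ in range(10):                                # generate 10 further time steps
    history.append(temporal.step(history))
print(len(history), history[-1].shape)             # → 13 (30,)
```

The point of the separation is visible in the code: the spatial module touches only single frames, while the temporal module only ever sees compact codes, which is what allows the dynamics (limit cycles, bifurcations, phase-locking) to live entirely in the upper layer.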
Cite
Text
Sukhbaatar et al. "Robust Generation of Dynamical Patterns in Human Motion by a Deep Belief Nets." Proceedings of the Third Asian Conference on Machine Learning, 2011.
Markdown
[Sukhbaatar et al. "Robust Generation of Dynamical Patterns in Human Motion by a Deep Belief Nets." Proceedings of the Third Asian Conference on Machine Learning, 2011.](https://mlanthology.org/acml/2011/sukhbaatar2011acml-robust/)
BibTeX
@inproceedings{sukhbaatar2011acml-robust,
title = {{Robust Generation of Dynamical Patterns in Human Motion by a Deep Belief Nets}},
author = {Sukhbaatar, Sainbayar and Makino, Takaki and Aihara, Kazuyuki and Chikayama, Takashi},
booktitle = {Proceedings of the Third Asian Conference on Machine Learning},
year = {2011},
pages = {231-246},
volume = {20},
url = {https://mlanthology.org/acml/2011/sukhbaatar2011acml-robust/}
}