Learning to Sit: Synthesizing Human-Chair Interactions via Hierarchical Control

Abstract

Recent progress in physics-based character animation has produced impressive breakthroughs in human motion synthesis by imitating motion capture data via deep reinforcement learning. However, these results have mostly been demonstrated on imitating a single distinct motion pattern, and they do not generalize to interactive tasks that require flexible motion patterns due to varying human-object spatial configurations. To bridge this gap, we focus on one class of interactive tasks: sitting onto a chair. We propose a hierarchical reinforcement learning framework that relies on a collection of subtask controllers trained to imitate simple, reusable mocap motions, and a meta controller trained to execute the subtasks properly to complete the main task. We experimentally demonstrate the strength of our approach over different non-hierarchical and hierarchical baselines. We also show that our approach can be applied to motion prediction given an image input. A supplementary video can be found at https://youtu.be/3CeN0OGz2cA.
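
To make the two-level scheme in the abstract concrete, here is a minimal, hypothetical Python sketch (not the authors' code): pretrained subtask policies stand in for the imitation controllers, and a meta controller reselects a subtask every fixed number of low-level steps. All names (`SubtaskPolicy`, `MetaController`, the switching interval `k`, the dummy environment) are illustrative assumptions, not details from the paper.

```python
import numpy as np

class SubtaskPolicy:
    """Stand-in for a controller pretrained to imitate one mocap clip."""
    def __init__(self, action_dim, seed):
        self.rng = np.random.default_rng(seed)
        self.action_dim = action_dim

    def act(self, state):
        # A real controller would be a neural network mapping the humanoid
        # state to joint torques; here we return a placeholder action.
        return self.rng.standard_normal(self.action_dim)

class MetaController:
    """Stand-in for the policy that schedules subtasks."""
    def __init__(self, num_subtasks, seed=0):
        self.rng = np.random.default_rng(seed)
        self.num_subtasks = num_subtasks

    def select(self, state):
        # A trained meta controller would condition on the humanoid and
        # chair configuration; here we pick a subtask uniformly at random.
        return self.rng.integers(self.num_subtasks)

def run_episode(env_step, init_state, subtasks, meta, horizon=300, k=30):
    """Every k low-level steps, the meta controller switches subtasks."""
    state = init_state
    for t in range(horizon):
        if t % k == 0:
            policy = subtasks[meta.select(state)]
        state = env_step(state, policy.act(state))
    return state

# Usage with a dummy environment that just echoes the action as the state.
subtasks = [SubtaskPolicy(action_dim=8, seed=i) for i in range(3)]  # e.g., walk, turn, sit
final = run_episode(lambda s, a: a, np.zeros(8), subtasks, MetaController(3))
```

The design point the sketch illustrates is the separation of concerns: subtask policies only need to reproduce short, reusable motions, while the meta controller handles the task-level decision of which motion to execute given the current human-chair configuration.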

Cite

Text

Chao et al. "Learning to Sit: Synthesizing Human-Chair Interactions via Hierarchical Control." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I7.16736

Markdown

[Chao et al. "Learning to Sit: Synthesizing Human-Chair Interactions via Hierarchical Control." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/chao2021aaai-learning/) doi:10.1609/AAAI.V35I7.16736

BibTeX

@inproceedings{chao2021aaai-learning,
  title     = {{Learning to Sit: Synthesizing Human-Chair Interactions via Hierarchical Control}},
  author    = {Chao, Yu-Wei and Yang, Jimei and Chen, Weifeng and Deng, Jia},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {5887--5895},
  doi       = {10.1609/AAAI.V35I7.16736},
  url       = {https://mlanthology.org/aaai/2021/chao2021aaai-learning/}
}