DSAG: A Scalable Deep Framework for Action-Conditioned Multi-Actor Full Body Motion Synthesis

Abstract

We introduce DSAG, a controllable deep neural framework for action-conditioned generation of full-body, multi-actor motions of variable duration. To compensate for the incompletely detailed finger joints in existing large-scale datasets, we introduce full-body dataset variants with detailed finger joints. To overcome shortcomings of existing generative approaches, we introduce dedicated representations for encoding finger joints. We also introduce novel spatiotemporal transformation blocks that combine multi-head self-attention with specialized temporal processing. These design choices enable generation across a large range of body joint counts (24–52), frame rates (13–50), global body movement styles (in-place, locomotion) and action category counts (12–120), across multiple datasets (NTU-120, HumanAct12, UESTC, Human3.6M). Our experimental results demonstrate DSAG's significant improvements over the state of the art and its suitability for action-conditioned generation at scale.
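The abstract mentions spatiotemporal blocks built around multi-head self-attention, where attention can be applied over joints (spatial) or over frames (temporal). The paper's exact block design is not reproduced here; the following is a minimal NumPy sketch of plain multi-head self-attention over one of those axes, with all function and parameter names (`multi_head_self_attention`, `w_q`, `w_k`, `w_v`, `n_heads`) being illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, w_q, w_k, w_v, n_heads):
    """Multi-head self-attention over a (seq_len, d_model) sequence.

    seq_len can index time steps (temporal attention) or body joints
    (spatial attention), depending on how the motion tensor is sliced.
    """
    seq_len, d_model = x.shape
    d_head = d_model // n_heads
    # Project to queries, keys, values and split into heads.
    q = (x @ w_q).reshape(seq_len, n_heads, d_head)
    k = (x @ w_k).reshape(seq_len, n_heads, d_head)
    v = (x @ w_v).reshape(seq_len, n_heads, d_head)
    # Scaled dot-product attention per head: (heads, seq, seq).
    scores = np.einsum('qhd,khd->hqk', q, k) / np.sqrt(d_head)
    attn = softmax(scores, axis=-1)
    # Weighted sum of values, then merge heads back to d_model.
    return np.einsum('hqk,khd->qhd', attn, v).reshape(seq_len, d_model)

# Example: 16 frames of a 32-dim pose embedding, 4 attention heads.
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 32))
w_q, w_k, w_v = (0.1 * rng.standard_normal((32, 32)) for _ in range(3))
y = multi_head_self_attention(x, w_q, w_k, w_v, n_heads=4)
```

The output keeps the input's `(seq_len, d_model)` shape, so such a block can be stacked and interleaved with temporal convolutions or recurrences for the "specialized temporal processing" the abstract refers to.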

Cite

Text

Gupta et al. "DSAG: A Scalable Deep Framework for Action-Conditioned Multi-Actor Full Body Motion Synthesis." Winter Conference on Applications of Computer Vision, 2023.

Markdown

[Gupta et al. "DSAG: A Scalable Deep Framework for Action-Conditioned Multi-Actor Full Body Motion Synthesis." Winter Conference on Applications of Computer Vision, 2023.](https://mlanthology.org/wacv/2023/gupta2023wacv-dsag/)

BibTeX

@inproceedings{gupta2023wacv-dsag,
  title     = {{DSAG: A Scalable Deep Framework for Action-Conditioned Multi-Actor Full Body Motion Synthesis}},
  author    = {Gupta, Debtanu and Maheshwari, Shubh and Kalakonda, Sai Shashank and Vaidyula, Manasvi and Sarvadevabhatla, Ravi Kiran},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2023},
  pages     = {4300--4308},
  url       = {https://mlanthology.org/wacv/2023/gupta2023wacv-dsag/}
}