Towards Open Domain Text-Driven Synthesis of Multi-Person Motions

Abstract

This work aims to generate natural and diverse group motions of multiple humans from textual descriptions. While single-person text-to-motion generation has been studied extensively, synthesizing motions for more than one or two subjects from in-the-wild prompts remains challenging, mainly due to the lack of available datasets. We therefore curate human pose and motion datasets by estimating pose information from large-scale image and video collections. Our models use a transformer-based diffusion framework that accommodates multiple datasets with any number of subjects or frames. Experiments cover both multi-person static pose generation and multi-person motion sequence generation. To our knowledge, our method is the first to generate multi-subject motion sequences with high diversity and fidelity from a large variety of textual prompts.
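
The framework description above is high-level; as a rough illustration of how a transformer-based diffusion denoiser can accommodate any number of subjects or frames, the minimal PyTorch sketch below flattens a zero-padded (persons x frames) pose tensor into one token sequence and masks out the padding. All names, dimensions, and design choices here are assumptions for exposition, not the authors' actual model.

import torch
import torch.nn as nn

class MultiPersonDenoiser(nn.Module):
    """Toy epsilon-predicting denoiser: persons x frames are flattened into one
    token sequence so a single attention stack sees all subjects at once."""

    def __init__(self, pose_dim=66, d_model=256, n_heads=4, n_layers=4,
                 max_persons=8, max_frames=196, max_steps=1000):
        super().__init__()
        self.pose_proj = nn.Linear(pose_dim, d_model)      # pose vector -> token
        self.person_pos = nn.Embedding(max_persons, d_model)
        self.frame_pos = nn.Embedding(max_frames, d_model)
        self.time_emb = nn.Embedding(max_steps, d_model)   # diffusion step t
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(d_model, pose_dim)            # predicted noise

    def forward(self, x, t, text_emb, pad_mask):
        # x:        (B, P, F, pose_dim) noisy poses, zero-padded over persons/frames
        # t:        (B,)                diffusion timesteps
        # text_emb: (B, d_model)        pooled text embedding (e.g. CLIP-style)
        # pad_mask: (B, P, F)           True where a token is padding
        B, P, F, D = x.shape
        pos = (self.person_pos(torch.arange(P, device=x.device)).unsqueeze(1)
               + self.frame_pos(torch.arange(F, device=x.device)).unsqueeze(0))
        tokens = (self.pose_proj(x) + pos).view(B, P * F, -1)
        cond = (self.time_emb(t) + text_emb).unsqueeze(1)  # one conditioning token
        seq = torch.cat([cond, tokens], dim=1)
        mask = torch.cat([torch.zeros(B, 1, dtype=torch.bool, device=x.device),
                          pad_mask.reshape(B, P * F)], dim=1)
        h = self.encoder(seq, src_key_padding_mask=mask)   # padding never attended to
        return self.out(h[:, 1:]).view(B, P, F, D)

# Toy usage: one batch mixing different subject and frame counts via padding.
model = MultiPersonDenoiser()
x = torch.randn(2, 3, 8, 66)                     # padded to 3 persons, 8 frames
pad = torch.zeros(2, 3, 8, dtype=torch.bool)
pad[0, 2:] = True                                # sample 0: only 2 subjects
pad[1, :, 6:] = True                             # sample 1: only 6 frames
eps = model(x, torch.randint(0, 1000, (2,)), torch.randn(2, 256), pad)
print(eps.shape)                                 # torch.Size([2, 3, 8, 66])

Padding plus a key-padding mask, rather than a fixed sequence length, is one standard way to let a single model train across heterogeneous datasets with varying subject and frame counts.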

Cite

Text

Shan et al. "Towards Open Domain Text-Driven Synthesis of Multi-Person Motions." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73650-6_5

Markdown

[Shan et al. "Towards Open Domain Text-Driven Synthesis of Multi-Person Motions." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/shan2024eccv-open/) doi:10.1007/978-3-031-73650-6_5

BibTeX

@inproceedings{shan2024eccv-open,
  title     = {{Towards Open Domain Text-Driven Synthesis of Multi-Person Motions}},
  author    = {Shan, Mengyi and Dong, Lu and Han, Yutao and Yao, Yuan and Liu, Tao and Nwogu, Ifeoma and Qi, Guo-Jun and Hill, Mitchell K.},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-73650-6_5},
  url       = {https://mlanthology.org/eccv/2024/shan2024eccv-open/}
}