MDD: A Dataset for Text-and-Music Conditioned Duet Dance Generation
Abstract
We introduce Multimodal DuetDance (MDD), a diverse multimodal benchmark dataset designed for text-controlled and music-conditioned 3D duet dance motion generation. Our dataset comprises 620 minutes of high-quality motion capture data performed by professional dancers, synchronized with music, and detailed with over 10K fine-grained natural language descriptions. The annotations capture a rich movement vocabulary, detailing spatial relationships, body movements, and rhythm, making MDD the first dataset to seamlessly integrate human motions, music, and text for duet dance generation. We introduce two novel tasks supported by our dataset: (1) Text-to-Duet, where, given music and a textual prompt, both the leader's and the follower's dance motions are generated; and (2) Text-to-Dance Accompaniment, where, given music, a textual prompt, and the leader's motion, the follower's motion is generated in a cohesive, text-aligned manner. We include baseline evaluations on both tasks to support future research. Please refer to the project website for the latest updates.
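To make the two task signatures concrete, below is a minimal, illustrative PyTorch sketch of their input/output interfaces. Everything here is an assumption for illustration only: the feature dimensions, the GRU backbone, and the module names (`TextToDuet`, `TextToAccompaniment`) are hypothetical and do not describe the paper's baselines.

```python
import torch
import torch.nn as nn

# All dimensions are illustrative assumptions; the abstract does not
# specify a feature layout for motion, music, or text.
MUSIC_DIM = 128    # assumed per-frame music features (e.g., mel/beat features)
TEXT_DIM = 512     # assumed sentence embedding of the language description
MOTION_DIM = 135   # assumed per-frame pose features for one dancer
HIDDEN = 256


class TextToDuet(nn.Module):
    """Toy stand-in for Text-to-Duet: (music, text) -> (leader, follower)."""

    def __init__(self):
        super().__init__()
        self.text_proj = nn.Linear(TEXT_DIM, HIDDEN)
        self.rnn = nn.GRU(MUSIC_DIM + HIDDEN, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, 2 * MOTION_DIM)  # leader + follower

    def forward(self, music, text_emb):
        # music: (B, T, MUSIC_DIM); text_emb: (B, TEXT_DIM)
        T = music.shape[1]
        text = self.text_proj(text_emb).unsqueeze(1).expand(-1, T, -1)
        h, _ = self.rnn(torch.cat([music, text], dim=-1))
        out = self.head(h)           # (B, T, 2 * MOTION_DIM)
        return out.chunk(2, dim=-1)  # leader motion, follower motion


class TextToAccompaniment(nn.Module):
    """Toy stand-in for Text-to-Dance Accompaniment:
    (music, text, leader motion) -> follower motion."""

    def __init__(self):
        super().__init__()
        self.text_proj = nn.Linear(TEXT_DIM, HIDDEN)
        self.rnn = nn.GRU(MUSIC_DIM + MOTION_DIM + HIDDEN, HIDDEN,
                          batch_first=True)
        self.head = nn.Linear(HIDDEN, MOTION_DIM)

    def forward(self, music, text_emb, leader):
        # leader: (B, T, MOTION_DIM) conditions the follower's motion
        T = music.shape[1]
        text = self.text_proj(text_emb).unsqueeze(1).expand(-1, T, -1)
        h, _ = self.rnn(torch.cat([music, leader, text], dim=-1))
        return self.head(h)          # follower motion: (B, T, MOTION_DIM)


if __name__ == "__main__":
    B, T = 2, 120  # e.g., 120 conditioning frames
    music = torch.randn(B, T, MUSIC_DIM)
    text = torch.randn(B, TEXT_DIM)
    leader, follower = TextToDuet()(music, text)
    follower2 = TextToAccompaniment()(music, text, leader)
    print(leader.shape, follower.shape, follower2.shape)
```

The only difference between the two interfaces is the extra leader-motion conditioning stream in the accompaniment task; any sequence model that accepts per-frame conditioning could fill the same roles.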
Cite
Text
Gupta et al. "MDD: A Dataset for Text-and-Music Conditioned Duet Dance Generation." International Conference on Computer Vision, 2025.
Markdown
[Gupta et al. "MDD: A Dataset for Text-and-Music Conditioned Duet Dance Generation." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/gupta2025iccv-mdd/)
BibTeX
@inproceedings{gupta2025iccv-mdd,
title = {{MDD: A Dataset for Text-and-Music Conditioned Duet Dance Generation}},
author = {Gupta, Prerit and Fotso-Puepi, Jason Alexander and Li, Zhengyuan and Mehta, Jay and Bera, Aniket},
booktitle = {International Conference on Computer Vision},
year = {2025},
pages = {13932--13941},
url = {https://mlanthology.org/iccv/2025/gupta2025iccv-mdd/}
}