X-Dancer: Expressive Music to Human Dance Video Generation

Abstract

We present X-Dancer, a novel zero-shot music-driven image animation pipeline that creates diverse, long-range, lifelike human dance videos from a single static image. At its core, we introduce a unified transformer-diffusion framework, featuring an autoregressive transformer model that synthesizes extended, music-synchronized token sequences for 2D body, head, and hand poses, which then guide a diffusion model to produce coherent and realistic dance video frames. Unlike traditional methods that primarily generate human motion in 3D, X-Dancer addresses data limitations and enhances scalability by modeling a wide spectrum of 2D dance motions, capturing their nuanced alignment with musical beats through readily available monocular videos. To achieve this, we first build a spatially compositional token representation from 2D human pose labels associated with keypoint confidences, encoding both large articulated body movements (e.g., upper and lower body) and fine-grained motions (e.g., head and hands). We then design a music-to-motion transformer model that autoregressively generates music-aligned dance pose token sequences, incorporating global attention to both musical style and prior motion context. Finally, we leverage a diffusion backbone to animate the reference image with these synthesized pose tokens through AdaIN, forming a fully differentiable end-to-end framework. Experimental results demonstrate that X-Dancer is able to produce both diverse and characterized dance videos, substantially outperforming state-of-the-art methods in terms of diversity, expressiveness, and realism. See our project page for more results: https://zeyuan-chen.com/X-Dancer/.
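The abstract mentions conditioning the diffusion backbone on synthesized pose tokens through AdaIN (adaptive instance normalization). As a point of reference, here is a minimal NumPy sketch of the AdaIN operation itself; the shapes and the idea that scale/shift parameters are predicted from pose tokens are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def adain(content, scale, shift, eps=1e-5):
    """AdaIN: normalize content features per channel, then modulate
    with externally predicted scale/shift parameters.

    content: (C, N) array, one feature map flattened over spatial dims.
    scale, shift: (C, 1) arrays; in X-Dancer these would be derived
    from the synthesized pose tokens (hypothetical shapes here).
    """
    mean = content.mean(axis=1, keepdims=True)
    std = content.std(axis=1, keepdims=True)
    normalized = (content - mean) / (std + eps)  # zero mean, unit std per channel
    return scale * normalized + shift            # re-inject conditioning statistics

# Toy usage: 4 channels, 16 spatial positions
feats = np.random.randn(4, 16)
scale = np.full((4, 1), 2.0)
shift = np.full((4, 1), 0.5)
out = adain(feats, scale, shift)
```

Because the normalization and modulation are simple differentiable arithmetic, such conditioning keeps the full transformer-to-diffusion pipeline end-to-end differentiable, as the abstract notes.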

Cite

Text

Chen et al. "X-Dancer: Expressive Music to Human Dance Video Generation." International Conference on Computer Vision, 2025.

Markdown

[Chen et al. "X-Dancer: Expressive Music to Human Dance Video Generation." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/chen2025iccv-xdancer/)

BibTeX

@inproceedings{chen2025iccv-xdancer,
  title     = {{X-Dancer: Expressive Music to Human Dance Video Generation}},
  author    = {Chen, Zeyuan and Xu, Hongyi and Song, Guoxian and Xie, You and Zhang, Chenxu and Chen, Xin and Wang, Chao and Chang, Di and Luo, Linjie},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {10602--10611},
  url       = {https://mlanthology.org/iccv/2025/chen2025iccv-xdancer/}
}