VideoTetris: Towards Compositional Text-to-Video Generation

Abstract

Diffusion models have demonstrated great success in text-to-video (T2V) generation. However, existing methods struggle with complex (long) video generation scenarios that involve multiple objects or dynamically changing object counts. To address these limitations, we propose VideoTetris, a novel framework that enables compositional T2V generation. Specifically, we propose spatio-temporal compositional diffusion, which precisely follows complex textual semantics by manipulating and composing the attention maps of the denoising network spatially and temporally. Moreover, we propose a new dynamic-aware data processing pipeline and a consistency regularization method to enhance the consistency of auto-regressive video generation. Extensive experiments demonstrate that VideoTetris achieves impressive qualitative and quantitative results in compositional T2V generation. Code is available at https://github.com/YangLing0818/VideoTetris
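
The abstract describes composing the denoising network's cross-attention spatially so that each sub-prompt only influences its assigned region. The sketch below illustrates that general idea only; it is not the authors' implementation, and all names (compose_cross_attention, region_keys, region_masks) are hypothetical. The actual VideoTetris code is at https://github.com/YangLing0818/VideoTetris.

```python
import torch

def compose_cross_attention(q, region_keys, region_values, region_masks, scale):
    """Illustrative spatial composition of cross-attention (not the official code).

    q:             (B, N, d) query tokens from the video latent (N = H*W spatial tokens)
    region_keys:   list of (B, M_i, d) text-encoder keys, one per sub-prompt/region
    region_values: list of (B, M_i, d) text-encoder values, one per sub-prompt/region
    region_masks:  list of (B, N, 1) spatial masks marking where each sub-prompt applies;
                   assumed here to sum to 1 over regions at every spatial location
    scale:         attention temperature, typically d ** -0.5
    """
    out = torch.zeros_like(q)
    for k, v, m in zip(region_keys, region_values, region_masks):
        # Attend from every spatial token to this region's text tokens...
        attn = torch.softmax(q @ k.transpose(-1, -2) * scale, dim=-1)  # (B, N, M_i)
        # ...but keep the result only inside this region's mask before summing.
        out = out + m * (attn @ v)
    return out
```

In this reading, each region contributes attention output only where its mask is active, so the composed feature map follows different sub-prompts in different parts of the frame; extending the masks along the time axis would give the temporal counterpart mentioned in the abstract.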

Cite

Text

Tian et al. "VideoTetris: Towards Compositional Text-to-Video Generation." Neural Information Processing Systems, 2024. doi:10.52202/079017-0928

Markdown

[Tian et al. "VideoTetris: Towards Compositional Text-to-Video Generation." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/tian2024neurips-videotetris/) doi:10.52202/079017-0928

BibTeX

@inproceedings{tian2024neurips-videotetris,
  title     = {{VideoTetris: Towards Compositional Text-to-Video Generation}},
  author    = {Tian, Ye and Yang, Ling and Yang, Haotian and Gao, Yuan and Deng, Yufan and Chen, Jingmin and Wang, Xintao and Yu, Zhaochen and Tao, Xin and Wan, Pengfei and Zhang, Di and Cui, Bin},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0928},
  url       = {https://mlanthology.org/neurips/2024/tian2024neurips-videotetris/}
}