MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation

Abstract

We propose the first joint audio-video generation framework, which produces high-quality, realistic videos that are engaging to watch and listen to simultaneously. To generate joint audio-video pairs, we propose a novel Multi-Modal Diffusion model (i.e., MM-Diffusion) with two coupled denoising autoencoders. In contrast to existing single-modal diffusion models, MM-Diffusion is built around a sequential multi-modal U-Net designed for a joint denoising process. Two subnets, one for audio and one for video, learn to gradually generate aligned audio-video pairs from Gaussian noise. To ensure semantic consistency across modalities, we propose a novel random-shift based attention block that bridges the two subnets, enabling efficient cross-modal alignment and thus reinforcing audio-video fidelity in both directions. Extensive experiments show superior results in unconditional audio-video generation and in zero-shot conditional tasks (e.g., video-to-audio). In particular, we achieve the best FVD and FAD on the Landscape and AIST++ dancing datasets. Turing tests with 10k votes further demonstrate dominant preferences for our model.
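
To make the cross-modal mechanism concrete, below is a minimal PyTorch sketch of a random-shift style cross-attention block. This is our own illustrative simplification, not the released MM-Diffusion code: the class name `RandomShiftCrossAttention`, the `window` hyperparameter, and the (batch, time, dim) tensor layout are all assumptions made for the example.

```python
# Illustrative sketch only -- not the authors' implementation.
import torch
import torch.nn as nn


class RandomShiftCrossAttention(nn.Module):
    """Cross-attend from one modality (queries) to a randomly shifted
    window of the other modality (keys/values). Restricting attention to
    a window keeps the cost linear in the window size rather than the
    full sequence length; randomizing the shift each step lets training
    cover all cross-modal offsets in expectation."""

    def __init__(self, dim: int, window: int = 8):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_kv = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)
        self.window = window  # assumed hyperparameter, not taken from the paper
        self.scale = dim ** -0.5

    def forward(self, x_q: torch.Tensor, x_kv: torch.Tensor) -> torch.Tensor:
        # x_q:  (B, Tq, D) tokens of the query modality (e.g., video frames)
        # x_kv: (B, Tk, D) tokens of the other modality (e.g., audio segments)
        B, Tk, D = x_kv.shape
        w = min(self.window, Tk)
        # Sample one random shift per call; different denoising steps
        # therefore attend to different cross-modal windows.
        shift = int(torch.randint(0, Tk - w + 1, (1,)))
        k, v = self.to_kv(x_kv[:, shift:shift + w]).chunk(2, dim=-1)
        q = self.to_q(x_q)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return self.proj(attn @ v)  # (B, Tq, D), fused back into the query subnet


# Usage sketch: video tokens attend over a random audio window (and, in the
# full model, audio tokens would symmetrically attend over a video window).
video = torch.randn(2, 16, 64)   # (batch, frames, dim)
audio = torch.randn(2, 128, 64)  # (batch, audio steps, dim)
block = RandomShiftCrossAttention(dim=64)
out = block(video, audio)        # (2, 16, 64)
```

In the paper's design, such a block sits between the two U-Net subnets so that each modality conditions its denoising on the other; the sketch above only shows the efficiency idea of attending over a randomly shifted window instead of the full cross-modal sequence.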

Cite

Text

Ruan et al. "MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.00985

Markdown

[Ruan et al. "MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/ruan2023cvpr-mmdiffusion/) doi:10.1109/CVPR52729.2023.00985

BibTeX

@inproceedings{ruan2023cvpr-mmdiffusion,
  title     = {{MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation}},
  author    = {Ruan, Ludan and Ma, Yiyang and Yang, Huan and He, Huiguo and Liu, Bei and Fu, Jianlong and Yuan, Nicholas Jing and Jin, Qin and Guo, Baining},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {10219--10228},
  doi       = {10.1109/CVPR52729.2023.00985},
  url       = {https://mlanthology.org/cvpr/2023/ruan2023cvpr-mmdiffusion/}
}