Matryoshka Diffusion Models

Abstract

Diffusion models are the de facto approach for generating high-quality images and videos, but learning high-dimensional models remains a formidable task due to computational and optimization challenges. Existing methods often resort to training cascaded models in pixel space, or using a downsampled latent space of a separately trained auto-encoder. In this paper, we introduce Matryoshka Diffusion (MDM), an end-to-end framework for high-resolution image and video synthesis. We propose a diffusion process that denoises inputs at multiple resolutions jointly and uses a NestedUNet architecture in which the features and parameters for small-scale inputs are nested within those for large scales. In addition, MDM enables a progressive training schedule from lower to higher resolutions, which leads to significant improvements in optimization for high-resolution generation. We demonstrate the effectiveness of our approach on various benchmarks, including class-conditioned image generation, high-resolution text-to-image, and text-to-video applications. Remarkably, we can train a single pixel-space model at resolutions of up to 1024×1024 pixels, demonstrating strong zero-shot generalization using the CC12M dataset, which contains only 12 million images. Code and pre-trained checkpoints are released at https://github.com/apple/ml-mdm.
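To make the two core ideas in the abstract concrete, the toy sketch below illustrates (a) a NestedUNet-style architecture, where a complete low-resolution denoising branch sits inside the high-resolution one and its features are reused at full scale, and (b) joint denoising across resolutions via a summed per-scale loss. This is a minimal illustration under our own assumptions, not the released implementation (see the GitHub link above for the official code); all class, function, and variable names here are hypothetical.

```python
# Illustrative sketch only -- not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNestedUNet(nn.Module):
    """Toy two-level nested denoiser: the inner (low-resolution) branch is a
    self-contained denoiser whose features are fed into the outer branch."""
    def __init__(self, channels=64):
        super().__init__()
        # Inner branch operates on the downsampled noisy input.
        self.inner_in = nn.Conv2d(3, channels, 3, padding=1)
        self.inner_mid = nn.Conv2d(channels, channels, 3, padding=1)
        self.inner_out = nn.Conv2d(channels, 3, 3, padding=1)
        # Outer branch operates at full resolution and also consumes the
        # upsampled inner features -- the "nesting".
        self.outer_in = nn.Conv2d(3, channels, 3, padding=1)
        self.outer_mid = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.outer_out = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x_hi, x_lo):
        # Denoise the low-resolution input with the inner branch.
        h_lo = F.silu(self.inner_in(x_lo))
        h_lo = F.silu(self.inner_mid(h_lo))
        eps_lo = self.inner_out(h_lo)
        # Reuse the inner features at full resolution.
        h_hi = F.silu(self.outer_in(x_hi))
        h_up = F.interpolate(h_lo, size=x_hi.shape[-2:], mode="nearest")
        h_hi = F.silu(self.outer_mid(torch.cat([h_hi, h_up], dim=1)))
        eps_hi = self.outer_out(h_hi)
        # Noise predictions at both resolutions, denoised jointly.
        return eps_hi, eps_lo

model = TinyNestedUNet()
noise_hi = torch.randn(2, 3, 64, 64)
x_hi = torch.randn(2, 3, 64, 64) + noise_hi   # noisy high-res input
noise_lo = F.avg_pool2d(noise_hi, 2)
x_lo = F.avg_pool2d(x_hi, 2)                  # noisy low-res input
eps_hi, eps_lo = model(x_hi, x_lo)
# Joint multi-resolution objective: sum the denoising losses per scale.
loss = F.mse_loss(eps_hi, noise_hi) + F.mse_loss(eps_lo, noise_lo)
```

The progressive schedule described in the abstract would, under this sketch, amount to first training only the inner branch at low resolution and then continuing training with the outer branch attached; the nested parameterization is what lets the high-resolution model start from the already-optimized low-resolution weights.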

Cite

Text

Gu et al. "Matryoshka Diffusion Models." International Conference on Learning Representations, 2024.

Markdown

[Gu et al. "Matryoshka Diffusion Models." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/gu2024iclr-matryoshka/)

BibTeX

@inproceedings{gu2024iclr-matryoshka,
  title     = {{Matryoshka Diffusion Models}},
  author    = {Gu, Jiatao and Zhai, Shuangfei and Zhang, Yizhe and Susskind, Joshua M. and Jaitly, Navdeep},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/gu2024iclr-matryoshka/}
}