Relay Diffusion: Unifying Diffusion Process Across Resolutions for Image Synthesis
Abstract
Diffusion models have achieved great success in image synthesis, but still face challenges in high-resolution generation. Through the lens of the discrete cosine transform, we find that the main reason is that *the same noise level on a higher resolution results in a higher signal-to-noise ratio in the frequency domain*. In this work, we present the Relay Diffusion Model (RDM), which transfers a low-resolution image or noise into an equivalent high-resolution one for the diffusion model via blurring diffusion and block noise. The diffusion process can therefore continue seamlessly at any new resolution or in any new model without restarting from pure noise or low-resolution conditioning. RDM achieves state-of-the-art FID on CelebA-HQ and sFID on ImageNet 256$\times$256, surpassing previous works such as ADM, LDM and DiT by a large margin. All code and checkpoints are open-sourced at \url{https://github.com/THUDM/RelayDiffusion}.
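To make the abstract's frequency-domain observation concrete, below is a minimal, self-contained sketch (not the authors' code). It compares per-band DCT signal-to-noise ratios when i.i.d. Gaussian noise of the same per-pixel strength is added to a toy image at 64$\times$64 versus 256$\times$256, and when the high-resolution noise is instead "block noise", which is assumed here to be nearest-neighbour-upsampled low-resolution noise. The toy image, `sigma`, and the `radial_snr` helper are illustrative choices, not taken from the paper.

```python
# Sketch: at the same per-pixel noise level, a higher-resolution image keeps a
# higher SNR in the low-frequency DCT bands; "block noise" (assumed here to be
# nearest-neighbour-upsampled low-resolution noise) brings the spectrum back
# toward the low-resolution case.
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)

def radial_snr(signal, noise, n_bins=8):
    """Average DCT power of the signal over DCT power of the noise, per radial frequency bin."""
    S = dctn(signal, norm="ortho") ** 2
    N = dctn(noise, norm="ortho") ** 2
    h, w = signal.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    r = np.sqrt((yy / h) ** 2 + (xx / w) ** 2)                 # normalized frequency radius
    bins = np.minimum((r / r.max() * n_bins).astype(int), n_bins - 1)
    return np.array([S[bins == b].mean() / N[bins == b].mean() for b in range(n_bins)])

def toy_image(res):
    """Smooth low-frequency toy 'image' rendered at the given resolution."""
    t = np.linspace(0, 1, res)
    return np.outer(np.sin(2 * np.pi * t), np.cos(2 * np.pi * t))

sigma = 0.5
low, high = toy_image(64), toy_image(256)

iid_low = sigma * rng.standard_normal(low.shape)
iid_high = sigma * rng.standard_normal(high.shape)
# Block noise: low-resolution i.i.d. noise upsampled by nearest neighbour (4x4 blocks).
block_high = np.kron(sigma * rng.standard_normal(low.shape), np.ones((4, 4)))

# Low-frequency bands only; the toy image has essentially no high-frequency content.
print("SNR (low bands), 64x64   + iid noise  :", radial_snr(low, iid_low)[:4].round(2))
print("SNR (low bands), 256x256 + iid noise  :", radial_snr(high, iid_high)[:4].round(2))
print("SNR (low bands), 256x256 + block noise:", radial_snr(high, block_high)[:4].round(2))
```

Under these assumptions, the i.i.d. case shows the effect described in the abstract: the low-frequency SNR grows with resolution at a fixed per-pixel noise level, while the block-noise variant stays closer to the 64$\times$64 baseline.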
Cite
Text
Teng et al. "Relay Diffusion: Unifying Diffusion Process Across Resolutions for Image Synthesis." International Conference on Learning Representations, 2024.
Markdown
[Teng et al. "Relay Diffusion: Unifying Diffusion Process Across Resolutions for Image Synthesis." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/teng2024iclr-relay/)
BibTeX
@inproceedings{teng2024iclr-relay,
title = {{Relay Diffusion: Unifying Diffusion Process Across Resolutions for Image Synthesis}},
author = {Teng, Jiayan and Zheng, Wendi and Ding, Ming and Hong, Wenyi and Wangni, Jianqiao and Yang, Zhuoyi and Tang, Jie},
booktitle = {International Conference on Learning Representations},
year = {2024},
url = {https://mlanthology.org/iclr/2024/teng2024iclr-relay/}
}