Denoising Diffusion Step-Aware Models
Abstract
Denoising Diffusion Probabilistic Models (DDPMs) have garnered popularity for data generation across various domains. However, a significant bottleneck is the necessity for whole-network computation during every step of the generative process, leading to high computational overheads. This paper presents a novel framework, Denoising Diffusion Step-aware Models (DDSM), to address this challenge. Unlike conventional approaches, DDSM employs a spectrum of neural networks whose sizes are adapted according to the importance of each generative step, as determined through evolutionary search. This step-wise network variation effectively circumvents redundant computational efforts, particularly in less critical steps, thereby enhancing the efficiency of the diffusion model. Furthermore, the step-aware design can be seamlessly integrated with other efficiency-geared diffusion models such as DDIMs and latent diffusion, thus broadening the scope of computational savings. Empirical evaluations demonstrate that DDSM achieves computational savings of 49% for CIFAR-10, 61% for CelebA-HQ, 59% for LSUN-bedroom, 71% for AFHQ, and 76% for ImageNet, all without compromising the generation quality. Our code and models are available at https://github.com/EnVision-Research/DDSM.
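To make the core idea concrete, below is a minimal sketch of step-aware sampling under a plain DDPM-style reverse process. The per-step network assignment (`step_schedule`) stands in for the output of DDSM's evolutionary search; all names and signatures here are illustrative assumptions, not the authors' actual API.

```python
# Hypothetical sketch: step-aware reverse diffusion where each timestep
# runs only the network that a (searched) schedule assigns to it, e.g. a
# small UNet on less critical steps and the full UNet on important ones.
import torch

def ddpm_step(x_t, eps_pred, t, alphas, alphas_bar, betas):
    """One ancestral DDPM reverse step given predicted noise eps_pred."""
    coef = betas[t] / torch.sqrt(1.0 - alphas_bar[t])
    mean = (x_t - coef * eps_pred) / torch.sqrt(alphas[t])
    if t == 0:
        return mean
    noise = torch.randn_like(x_t)
    return mean + torch.sqrt(betas[t]) * noise

@torch.no_grad()
def step_aware_sample(networks, step_schedule, shape, alphas, alphas_bar, betas):
    """Sample by dispatching each step t to networks[step_schedule[t]].

    `networks` maps a size tag (e.g. "small", "large") to a noise-prediction
    model; `step_schedule[t]` is the tag chosen for step t. In DDSM this
    schedule is found by evolutionary search; here it is just an input.
    """
    T = len(step_schedule)
    x = torch.randn(shape)
    for t in reversed(range(T)):
        net = networks[step_schedule[t]]   # pick the step's network
        eps = net(x, torch.tensor([t]))    # predict noise at step t
        x = ddpm_step(x, eps, t, alphas, alphas_bar, betas)
    return x
```

The computational saving comes entirely from the dispatch line: on steps the search deems unimportant, the loop invokes a cheaper network, so the full-size model runs only where it measurably affects sample quality.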
Cite

Text

Yang et al. "Denoising Diffusion Step-Aware Models." International Conference on Learning Representations, 2024.

Markdown

[Yang et al. "Denoising Diffusion Step-Aware Models." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/yang2024iclr-denoising/)

BibTeX
@inproceedings{yang2024iclr-denoising,
  title = {{Denoising Diffusion Step-Aware Models}},
  author = {Yang, Shuai and Chen, Yukang and Wang, Luozhou and Liu, Shu and Chen, Ying-Cong},
  booktitle = {International Conference on Learning Representations},
  year = {2024},
  url = {https://mlanthology.org/iclr/2024/yang2024iclr-denoising/}
}