On Inference Stability for Diffusion Models
Abstract
Diffusion Probabilistic Models (DPMs) represent an emerging domain of generative models that excel at generating diverse and high-quality images. However, current training methods for DPMs often neglect the correlation between timesteps, limiting the model's ability to generate images effectively. Notably, we theoretically point out that this issue can be caused by the cumulative estimation gap between the predicted and the actual trajectory. To minimize that gap, we propose a novel sequence-aware loss that aims to reduce the estimation gap and thereby enhance sampling quality. Furthermore, we theoretically show that our proposed loss function is a tighter upper bound of the estimation loss than the conventional loss in DPMs. Experimental results on several benchmark datasets, including CIFAR10, CelebA, and CelebA-HQ, consistently show a remarkable improvement of our proposed method in image generation quality, as measured by FID and Inception Score, over several DPM baselines. Our code and pre-trained checkpoints are available at https://github.com/VinAIResearch/SA-DPM.
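The abstract contrasts the conventional per-timestep DPM training loss with a sequence-aware loss that accounts for error accumulation along the sampling trajectory. The toy sketch below illustrates that distinction in numpy; it is not the authors' implementation, and the weights `lam` and the cumulative-error formulation are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a 1-D "denoiser" with imperfect noise predictions.
T = 10                                         # number of diffusion timesteps
eps = rng.normal(size=(T, 4))                  # true noise at each timestep
eps_hat = eps + 0.1 * rng.normal(size=(T, 4))  # noisy model predictions

# Conventional DPM loss: per-timestep noise-prediction MSE,
# treating every timestep independently.
loss_conventional = ((eps_hat - eps) ** 2).mean()

# Sequence-aware variant (illustrative sketch): penalize the error
# accumulated along the trajectory, so correlated errors across
# timesteps are not ignored. The weights lam are assumed here.
lam = np.linspace(1.0, 0.5, T)
cum_err = np.cumsum(eps_hat - eps, axis=0)     # accumulated gap up to step t
loss_sequence_aware = (lam[:, None] * cum_err ** 2).mean()

print(loss_conventional, loss_sequence_aware)
```

The cumulative sum is the key difference: an error made early in the trajectory keeps contributing at every later step, mimicking how the estimation gap compounds during sampling.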
Cite
Text
Nguyen et al. "On Inference Stability for Diffusion Models." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I13.29359
Markdown
[Nguyen et al. "On Inference Stability for Diffusion Models." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/nguyen2024aaai-inference/) doi:10.1609/AAAI.V38I13.29359
BibTeX
@inproceedings{nguyen2024aaai-inference,
title = {{On Inference Stability for Diffusion Models}},
author = {Nguyen, Viet and Vu, Giang and Thanh, Tung Nguyen and Than, Khoat and Tran, Toan},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {14449--14456},
doi = {10.1609/AAAI.V38I13.29359},
url = {https://mlanthology.org/aaai/2024/nguyen2024aaai-inference/}
}