Learning Quantized Adaptive Conditions for Diffusion Models

Abstract

The curvature of ODE trajectories in diffusion models hinders their ability to generate high-quality images in a small number of function evaluations (NFE). In this paper, we propose a novel and effective approach to reduce trajectory curvature by utilizing adaptive conditions. By employing an extremely lightweight quantized encoder, our method adds only 1% more training parameters and eliminates the need for extra regularization terms, yet achieves significantly better sample quality. Our approach accelerates ODE sampling while preserving the downstream image-editing capabilities of SDE-based techniques. Extensive experiments verify that our method can generate high-quality results under extremely limited sampling costs. With only 6 NFE, we achieve 5.14 FID on CIFAR-10, 6.91 FID on FFHQ 64×64, and 3.10 FID on AFHQv2.
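To make the idea of a lightweight quantized condition encoder concrete, here is a minimal PyTorch sketch. It is an illustrative assumption, not the paper's implementation: the layer sizes, the sign-based binarization, the straight-through gradient, and the names `QuantizedConditionEncoder` and `denoiser` are all hypothetical, and only the general pattern (a tiny encoder producing a quantized code that conditions the denoiser) reflects the abstract.

```python
import torch
import torch.nn as nn

class QuantizedConditionEncoder(nn.Module):
    """Tiny encoder mapping an image to a binary condition code.

    Hypothetical sketch: channel widths, code dimension, and the
    straight-through quantizer are assumptions for illustration only.
    """
    def __init__(self, in_channels=3, code_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, code_dim),
        )

    def forward(self, x):
        h = self.net(x)            # continuous pre-quantization features
        q = torch.sign(h)          # quantize to {-1, +1}
        # Straight-through estimator: forward pass uses the quantized code,
        # backward pass routes gradients through the continuous features.
        return h + (q - h).detach()


# Usage sketch: the quantized code conditions the denoiser alongside the
# timestep. `denoiser` is a placeholder for any noise-prediction network
# that accepts an extra conditioning vector.
encoder = QuantizedConditionEncoder()
x0 = torch.randn(8, 3, 64, 64)     # a batch of clean training images
cond = encoder(x0)                 # adaptive binary condition per image
# eps_pred = denoiser(x_t, t, cond)  # standard diffusion training step
```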

Cite

Text

Liang et al. "Learning Quantized Adaptive Conditions for Diffusion Models." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73004-7_21

Markdown

[Liang et al. "Learning Quantized Adaptive Conditions for Diffusion Models." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/liang2024eccv-learning/) doi:10.1007/978-3-031-73004-7_21

BibTeX

@inproceedings{liang2024eccv-learning,
  title     = {{Learning Quantized Adaptive Conditions for Diffusion Models}},
  author    = {Liang, Yuchen and Tian, Yuchuan and Yu, Lei and Tang, Huaao and Hu, Jie and Fang, Xiangzhong and Chen, Hanting},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-73004-7_21},
  url       = {https://mlanthology.org/eccv/2024/liang2024eccv-learning/}
}