Anti-Exposure Bias in Diffusion Models

Abstract

Diffusion models (DMs) have achieved record-breaking performance in image generation tasks. Nevertheless, in practice, the training-sampling discrepancy, caused by score estimation error and discretization error, limits the modeling ability of DMs, a phenomenon known as exposure bias. To alleviate this exposure bias and further improve generative performance, we put forward a prompt learning framework built upon a lightweight prompt prediction model. Concretely, our model learns an anti-bias prompt for the generated sample at each sampling step, aiming to compensate for the exposure bias that arises. Following this design philosophy, our framework rectifies the sampling trajectory to match the training trajectory, thereby reducing the divergence between the target data distribution and the modeled distribution. To train the prompt prediction model, we simulate exposure bias by constructing training data and introduce a time-dependent weighting function for optimization. Empirical results on various DMs demonstrate the superiority of our prompt learning framework across three benchmark datasets. Importantly, the optimized prompt prediction model improves image quality with only a negligible 5% increase in sampling overhead.
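The mechanism described in the abstract can be illustrated with a minimal sketch: at each reverse-diffusion step, a lightweight prompt model predicts an anti-bias correction that is applied to the current sample, weighted by a time-dependent function, before the denoiser runs. All names below (`prompt_model`, `score_model`, `time_weight`) and the toy linear update are hypothetical stand-ins, not the paper's actual networks or schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

def score_model(x, t):
    # Stand-in for a trained noise predictor eps_theta(x, t);
    # here a toy linear map for illustration only.
    return 0.1 * x

def prompt_model(x, t):
    # Hypothetical lightweight prompt prediction model: outputs an
    # anti-bias prompt (a correction term) for the sample at step t.
    return -0.05 * x

def time_weight(t, T):
    # Assumed time-dependent weighting: stronger correction at
    # noisier (earlier reverse) steps. The paper's actual weighting
    # function may differ.
    return t / T

def sample(x_T, T=50):
    # Simplified reverse-diffusion loop (update constants omitted):
    # the anti-bias prompt rectifies the sampling trajectory toward
    # the training trajectory before each denoising update.
    x = x_T
    for t in range(T, 0, -1):
        x = x + time_weight(t, T) * prompt_model(x, t)  # anti-bias correction
        x = x - score_model(x, t)                       # toy denoising step
    return x

x0 = sample(rng.standard_normal(4))
```

Because the prompt model is lightweight, the per-step correction adds only a small constant cost to each sampling iteration, consistent with the abstract's claim of roughly 5% overhead.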

Cite

Text

Zhang et al. "Anti-Exposure Bias in Diffusion Models." International Conference on Learning Representations, 2025.

Markdown

[Zhang et al. "Anti-Exposure Bias in Diffusion Models." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/zhang2025iclr-antiexposure/)

BibTeX

@inproceedings{zhang2025iclr-antiexposure,
  title     = {{Anti-Exposure Bias in Diffusion Models}},
  author    = {Zhang, Junyu and Liu, Daochang and Park, Eunbyung and Zhang, Shichao and Xu, Chang},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/zhang2025iclr-antiexposure/}
}