Zigzag Diffusion Sampling: Diffusion Models Can Self-Improve via Self-Reflection

Abstract

Diffusion models, the most popular generative paradigm so far, can inject conditional information into the generation path to guide the latent towards desired directions. However, existing text-to-image diffusion models often fail to maintain high image quality and high prompt-image alignment on challenging prompts. To mitigate this issue and enhance existing pretrained diffusion models, this paper makes three main contributions. First, we propose **diffusion self-reflection**, which alternately performs denoising and inversion, and demonstrate with theoretical and empirical evidence that such self-reflection can leverage the guidance gap between denoising and inversion to capture prompt-related semantic information. Second, motivated by this theoretical analysis, we derive Zigzag Diffusion Sampling (Z-Sampling), a novel self-reflection-based diffusion sampling method that leverages the guidance gap between denoising and inversion to accumulate semantic information step by step along the sampling path, leading to improved sampling results. Moreover, as a plug-and-play method, Z-Sampling can be generally applied to various diffusion models (e.g., accelerated ones and Transformer-based ones) with very limited coding and computational costs. Third, our extensive experiments demonstrate that Z-Sampling can generally and significantly enhance generation quality across various benchmark datasets, diffusion models, and performance evaluation metrics. For example, DreamShaper with Z-Sampling can self-improve with an HPSv2 winning rate of up to **94%** over the original results. Moreover, Z-Sampling can further enhance existing diffusion models when combined with other orthogonal methods, including Diffusion-DPO. The code is publicly available at [github.com/xie-lab-ml/Zigzag-Diffusion-Sampling](https://github.com/xie-lab-ml/Zigzag-Diffusion-Sampling).
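The zigzag idea described in the abstract can be illustrated with a minimal numerical sketch. The toy linear noise predictors, the Euler-style update rule, and all names below (`zigzag_sample`, `gamma_denoise`, `gamma_invert`) are illustrative assumptions, not the paper's actual implementation; the point is only the loop structure: denoise with strong guidance, invert with weak guidance, and let the guidance gap accumulate a net push toward the condition.

```python
import numpy as np

def zigzag_sample(x, eps_cond, eps_uncond, steps,
                  gamma_denoise=5.0, gamma_invert=1.0, lr=0.1):
    """Toy zigzag sampling loop (illustrative, not the paper's sampler).

    eps_cond / eps_uncond: callables returning conditional and
    unconditional noise estimates for a latent x.
    """
    for _ in range(steps):
        # Denoising step with strong classifier-free-guidance-style scale.
        g = eps_uncond(x) + gamma_denoise * (eps_cond(x) - eps_uncond(x))
        x = x - lr * g
        # Approximate inversion step with weak guidance: move back up the
        # path, but under a smaller scale than the denoising step used.
        g = eps_uncond(x) + gamma_invert * (eps_cond(x) - eps_uncond(x))
        x = x + lr * g
        # Denoise again; the gap (gamma_denoise - gamma_invert) leaves a
        # net displacement toward the condition that accumulates per step.
        g = eps_uncond(x) + gamma_denoise * (eps_cond(x) - eps_uncond(x))
        x = x - lr * g
    return x
```

With toy predictors such as `eps_cond = lambda x: x - target` and `eps_uncond = lambda x: x`, each zigzag cycle nudges the latent further toward `target` than a plain denoise-only step would, mirroring the accumulation argument in the abstract.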

Cite

Text

LiChen et al. "Zigzag Diffusion Sampling: Diffusion Models Can Self-Improve via Self-Reflection." International Conference on Learning Representations, 2025.

Markdown

[LiChen et al. "Zigzag Diffusion Sampling: Diffusion Models Can Self-Improve via Self-Reflection." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/lichen2025iclr-zigzag/)

BibTeX

@inproceedings{lichen2025iclr-zigzag,
  title     = {{Zigzag Diffusion Sampling: Diffusion Models Can Self-Improve via Self-Reflection}},
  author    = {LiChen, Bai and Shao, Shitong and Zhou, Zikai and Qi, Zipeng and Xu, Zhiqiang and Xiong, Haoyi and Xie, Zeke},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/lichen2025iclr-zigzag/}
}