Self-Corrected Flow Distillation for Consistent One-Step and Few-Step Image Generation

Abstract

Flow matching has emerged as a promising framework for training generative models, demonstrating impressive empirical performance while being easier to train than diffusion-based models. However, it still requires numerous function evaluations during sampling. To address this limitation, we introduce a self-corrected flow distillation method that effectively integrates consistency models and adversarial training within the flow-matching framework. To our knowledge, this work is the first to achieve consistent generation quality in both few-step and one-step sampling. Extensive experiments validate the effectiveness of our method, yielding superior results both quantitatively and qualitatively on CelebA-HQ and on zero-shot benchmarks on the COCO dataset.
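To make the flow-matching setup behind this distillation concrete, below is a minimal sketch of the standard rectified-flow training objective: a velocity field is regressed onto the constant displacement between a noise sample and a data sample along a linear path. This is the generic framework the abstract refers to, not the authors' implementation; the function names (`interpolate`, `velocity_target`, `flow_matching_loss`) are illustrative assumptions.

```python
import numpy as np

def interpolate(x0, x1, t):
    """Linear path x_t = (1 - t) * x0 + t * x1 between noise x0 and data x1."""
    return (1.0 - t) * x0 + t * x1

def velocity_target(x0, x1):
    """Regression target for the velocity field: the constant displacement x1 - x0."""
    return x1 - x0

def flow_matching_loss(model, x0, x1, t):
    """Mean-squared error between the predicted and target velocity at x_t."""
    xt = interpolate(x0, x1, t)
    v_pred = model(xt, t)
    return np.mean((v_pred - velocity_target(x0, x1)) ** 2)

# Sanity check: an oracle that outputs the true velocity incurs zero loss.
x0 = np.zeros(4)  # stands in for a noise sample
x1 = np.ones(4)   # stands in for a data sample
oracle = lambda xt, t: x1 - x0
print(flow_matching_loss(oracle, x0, x1, 0.5))  # → 0.0
```

Sampling from such a model integrates the learned velocity field, typically with many Euler steps; a one-step sampler collapses this to a single step `x1 ≈ x0 + v(x0, 0)`, which is the regime the distilled student targets.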

Cite

Text

Dao et al. "Self-Corrected Flow Distillation for Consistent One-Step and Few-Step Image Generation." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I3.32269

Markdown

[Dao et al. "Self-Corrected Flow Distillation for Consistent One-Step and Few-Step Image Generation." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/dao2025aaai-self/) doi:10.1609/AAAI.V39I3.32269

BibTeX

@inproceedings{dao2025aaai-self,
  title     = {{Self-Corrected Flow Distillation for Consistent One-Step and Few-Step Image Generation}},
  author    = {Dao, Quan and Phung, Hao and Dao, Trung Tuan and Metaxas, Dimitris N. and Tran, Anh Tuan},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {2654--2662},
  doi       = {10.1609/AAAI.V39I3.32269},
  url       = {https://mlanthology.org/aaai/2025/dao2025aaai-self/}
}