SwiftBrush V2: Make Your One-Step Diffusion Model Better than Its Teacher

Abstract

In this paper, we aim to enhance the performance of SwiftBrush, a prominent one-step text-to-image diffusion model, so that it is competitive with its multi-step Stable Diffusion counterpart. We first explore the quality-diversity trade-off between SwiftBrush and SD Turbo: the former excels in image diversity, while the latter excels in image quality. This observation motivates our proposed modifications to the training methodology, including better weight initialization and efficient LoRA training. Moreover, a novel clamped CLIP loss enhances image-text alignment and yields improved image quality. Remarkably, by merging the weights of the LoRA-trained and fully trained models, we obtain a new state-of-the-art one-step diffusion model that achieves an FID of 8.14 and surpasses all GAN-based and multi-step Stable Diffusion models.
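The abstract names two concrete mechanisms: a clamped CLIP loss and weight merging between a LoRA-trained and a fully trained model. The sketch below is a minimal PyTorch illustration of one plausible reading, not the paper's implementation: the threshold tau, the interpolation coefficient lam, and the helper names clamped_clip_loss and merge_state_dicts are assumptions made here for illustration, and the paper's exact clamping rule may differ.

import torch
import torch.nn.functional as F

def clamped_clip_loss(image_emb: torch.Tensor,
                      text_emb: torch.Tensor,
                      tau: float = 0.35) -> torch.Tensor:
    # Cosine similarity between CLIP image and text embeddings.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    sim = (image_emb * text_emb).sum(dim=-1)
    # Clamp: once similarity exceeds tau, the loss (and its gradient)
    # vanishes, so alignment is no longer pushed at the expense of quality.
    return torch.clamp(tau - sim, min=0.0).mean()

def merge_state_dicts(full_sd: dict, lora_sd: dict, lam: float = 0.5) -> dict:
    # Linear interpolation of two models' weights; assumes the LoRA deltas
    # have already been folded into lora_sd's base weights.
    return {k: lam * full_sd[k] + (1.0 - lam) * lora_sd[k] for k in full_sd}

if __name__ == "__main__":
    img = torch.randn(4, 512, requires_grad=True)  # stand-in CLIP image embeddings
    txt = torch.randn(4, 512)                      # stand-in CLIP text embeddings
    loss = clamped_clip_loss(img, txt)
    loss.backward()
    print(loss.item())

The clamp zeroes the gradient wherever similarity already exceeds tau, matching the abstract's point that alignment can improve without degrading image quality; the merge is a plain interpolation of state dicts, consistent with the described combination of LoRA and full training.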

Cite

Text

Dao et al. "SwiftBrush V2: Make Your One-Step Diffusion Model Better than Its Teacher." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73007-8_11

Markdown

[Dao et al. "SwiftBrush V2: Make Your One-Step Diffusion Model Better than Its Teacher." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/dao2024eccv-swiftbrush/) doi:10.1007/978-3-031-73007-8_11

BibTeX

@inproceedings{dao2024eccv-swiftbrush,
  title     = {{SwiftBrush V2: Make Your One-Step Diffusion Model Better than Its Teacher}},
  author    = {Dao, Trung Tuan and Nguyen, Thuan Hoang and Van Le, Thanh and Vu, Duc H. and Nguyen, Khoi and Pham, Cuong and Tran, Anh T.},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-73007-8_11},
  url       = {https://mlanthology.org/eccv/2024/dao2024eccv-swiftbrush/}
}