DreamDistribution: Learning Prompt Distribution for Diverse In-Distribution Generation

Abstract

The popularization of Text-to-Image (T2I) diffusion models enables the generation of high-quality images from text descriptions. However, generating diverse customized images that share reference visual attributes remains challenging. This work focuses on personalizing T2I diffusion models at a more abstract concept or category level, adapting commonalities from a set of reference images while creating new instances with sufficient variation. We introduce a solution that allows a pretrained T2I diffusion model to learn a distribution over a set of soft prompts, enabling the generation of novel images by sampling prompts from the learned distribution. These prompts offer text-guided editing capabilities and additional flexibility in controlling variation and mixing between multiple distributions. We also show the adaptability of the learned prompt distribution to other tasks, such as text-to-3D. Finally, we demonstrate the effectiveness of our approach through quantitative analysis, including automatic evaluation and human assessment.
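To make the core idea concrete, below is a minimal sketch of learning a distribution over soft prompt embeddings and sampling from it. This is not the authors' implementation: the class name `PromptDistribution`, the shapes (`num_tokens`, `embed_dim`), the diagonal-Gaussian parameterization, and the mean-interpolation mixing are all illustrative assumptions; in practice the sampled embeddings would be fed to the frozen text encoder of a pretrained T2I diffusion model, which is omitted here.

```python
# Hedged sketch, not the paper's code: models a set of soft prompt
# token embeddings as a learnable diagonal Gaussian and draws
# reparameterized samples, one per generation.
import torch
import torch.nn as nn


class PromptDistribution(nn.Module):
    """Learnable Gaussian over a sequence of soft prompt token embeddings."""

    def __init__(self, num_tokens: int = 8, embed_dim: int = 768):
        super().__init__()
        # Per-token mean and log standard deviation (assumed parameterization).
        self.mu = nn.Parameter(torch.randn(num_tokens, embed_dim) * 0.02)
        self.log_sigma = nn.Parameter(torch.full((num_tokens, embed_dim), -3.0))

    def sample(self, scale: float = 1.0) -> torch.Tensor:
        # Reparameterized draw; `scale` widens or narrows the distribution,
        # giving a knob for controlling variation at generation time.
        eps = torch.randn_like(self.mu)
        return self.mu + scale * self.log_sigma.exp() * eps


def mix(a: PromptDistribution, b: PromptDistribution, w: float) -> torch.Tensor:
    # One plausible way to realize "mixing between multiple distributions":
    # interpolate samples from two learned prompt distributions.
    return (1.0 - w) * a.sample() + w * b.sample()


if __name__ == "__main__":
    dist = PromptDistribution()
    soft_prompt = dist.sample()  # would condition the frozen text encoder
    print(soft_prompt.shape)     # torch.Size([8, 768])
```

Under this reading, training optimizes `mu` and `log_sigma` with the usual diffusion denoising loss while the T2I backbone stays frozen, so each sampled prompt yields a distinct in-distribution image.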

Cite

Text

Zhao et al. "DreamDistribution: Learning Prompt Distribution for Diverse In-Distribution Generation." International Conference on Learning Representations, 2025.

Markdown

[Zhao et al. "DreamDistribution: Learning Prompt Distribution for Diverse In-Distribution Generation." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/zhao2025iclr-dreamdistribution/)

BibTeX

@inproceedings{zhao2025iclr-dreamdistribution,
  title     = {{DreamDistribution: Learning Prompt Distribution for Diverse In-Distribution Generation}},
  author    = {Zhao, Brian Nlong and Xiao, Yuhang and Xu, Jiashu and Jiang, Xinyang and Yang, Yifan and Li, Dongsheng and Itti, Laurent and Vineet, Vibhav and Ge, Yunhao},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/zhao2025iclr-dreamdistribution/}
}