GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models

Abstract

Diffusion models have recently been shown to generate high-quality synthetic images, especially when paired with a guidance technique to trade off diversity for fidelity. We explore diffusion models for the problem of text-conditional image synthesis and compare two different guidance strategies: CLIP guidance and classifier-free guidance. We find that the latter is preferred by human evaluators for both photorealism and caption similarity, and often produces photorealistic samples. Samples from a 3.5 billion parameter text-conditional diffusion model using classifier-free guidance are favored by human evaluators to those from DALL-E, even when the latter uses expensive CLIP reranking. Additionally, we find that our models can be fine-tuned to perform image inpainting, enabling powerful text-driven image editing. We train a smaller model on a filtered dataset and release the code and weights at https://github.com/openai/glide-text2im.
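For readers unfamiliar with the classifier-free guidance strategy compared in the abstract: it runs the same diffusion model twice per step, once conditioned on the caption and once on an empty caption, and extrapolates away from the unconditional prediction. The sketch below is a minimal illustration of that standard combination, not code from the paper or the glide-text2im repository; the model call, its signature, and the parameter names are hypothetical.

def guided_epsilon(model, x_t, t, cond_tokens, uncond_tokens, guidance_scale=3.0):
    # model is assumed to return the predicted noise eps(x_t, t | tokens);
    # this name and signature are illustrative, not the glide-text2im API.
    eps_cond = model(x_t, t, tokens=cond_tokens)      # prediction given the caption
    eps_uncond = model(x_t, t, tokens=uncond_tokens)  # prediction given an empty caption
    # Classifier-free guidance: eps_uncond + s * (eps_cond - eps_uncond)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

A guidance scale above 1 pushes samples toward the caption-conditioned prediction, which is the diversity-for-fidelity trade-off the abstract refers to.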

Cite

Text

Nichol et al. "GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models." International Conference on Machine Learning, 2022.

Markdown

[Nichol et al. "GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/nichol2022icml-glide/)

BibTeX

@inproceedings{nichol2022icml-glide,
  title     = {{GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models}},
  author    = {Nichol, Alexander Quinn and Dhariwal, Prafulla and Ramesh, Aditya and Shyam, Pranav and Mishkin, Pamela and McGrew, Bob and Sutskever, Ilya and Chen, Mark},
  booktitle = {International Conference on Machine Learning},
  year      = {2022},
  pages     = {16784--16804},
  volume    = {162},
  url       = {https://mlanthology.org/icml/2022/nichol2022icml-glide/}
}