GLIGEN: Open-Set Grounded Text-to-Image Generation

Abstract

Large-scale text-to-image diffusion models have made amazing advances. However, the status quo is to use text input alone, which can impede controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text2img generation with caption and bounding box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN's zero-shot performance on COCO and LVIS outperforms existing supervised layout-to-image baselines by a large margin.
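The gated injection of grounding information can be illustrated with a minimal PyTorch sketch. This is an assumption-laden simplification, not GLIGEN's official implementation: the class name, argument names, and token dimensions below are ours, and the grounding tokens are assumed to already be encoded (e.g., from bounding boxes and phrases). The key idea it shows is that the new trainable branch is blended back through a gate initialized at zero, so the frozen pre-trained model's behavior is unchanged at the start of training.

```python
import torch
import torch.nn as nn

class GatedGroundingAttention(nn.Module):
    """Sketch of a gated layer that injects grounding tokens into a frozen
    diffusion backbone (illustrative only, not the official GLIGEN code)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Learnable gate, initialized to 0: tanh(0) = 0, so the new branch
        # contributes nothing until training moves the gate away from zero.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, visual_tokens: torch.Tensor,
                grounding_tokens: torch.Tensor) -> torch.Tensor:
        # Attend jointly over visual and grounding tokens.
        tokens = torch.cat([visual_tokens, grounding_tokens], dim=1)
        attended, _ = self.attn(tokens, tokens, tokens)
        # Keep only the visual positions and add them back through the gate.
        n = visual_tokens.shape[1]
        return visual_tokens + torch.tanh(self.gate) * attended[:, :n]


# Toy usage: 64 visual tokens and 4 grounding tokens (hypothetical sizes).
layer = GatedGroundingAttention(dim=320)
v = torch.randn(2, 64, 320)
g = torch.randn(2, 4, 320)
out = layer(v, g)  # same shape as v; equals v exactly at initialization
```

Because only layers like this are trained while all pre-trained weights stay frozen, the model can learn to follow bounding-box conditions without forgetting the concept knowledge of the original text-to-image model.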

Cite

Text

Li et al. "GLIGEN: Open-Set Grounded Text-to-Image Generation." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.02156

Markdown

[Li et al. "GLIGEN: Open-Set Grounded Text-to-Image Generation." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/li2023cvpr-gligen/) doi:10.1109/CVPR52729.2023.02156

BibTeX

@inproceedings{li2023cvpr-gligen,
  title     = {{GLIGEN: Open-Set Grounded Text-to-Image Generation}},
  author    = {Li, Yuheng and Liu, Haotian and Wu, Qingyang and Mu, Fangzhou and Yang, Jianwei and Gao, Jianfeng and Li, Chunyuan and Lee, Yong Jae},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {22511--22521},
  doi       = {10.1109/CVPR52729.2023.02156},
  url       = {https://mlanthology.org/cvpr/2023/li2023cvpr-gligen/}
}