Getting It Right: Improving Spatial Consistency in Text-to-Image Models

Abstract

One of the key shortcomings in current text-to-image (T2I) models is their inability to consistently generate images which faithfully follow the spatial relationships specified in the text prompt. In this paper, we offer a comprehensive investigation of this limitation, while also developing datasets and methods that support algorithmic solutions to improve spatial reasoning in T2I models. We find that spatial relationships are under-represented in the image descriptions found in current vision-language datasets. To alleviate this data bottleneck, we create SPRIGHT, the first spatially focused, large-scale dataset, by re-captioning 6 million images from 4 widely used vision datasets, and through a 3-fold evaluation and analysis pipeline, show that SPRIGHT improves the proportion of spatial relationships in existing datasets. We demonstrate the efficacy of SPRIGHT data by showing that using only ∼0.25% of SPRIGHT results in a 22% improvement in generating spatially accurate images while also improving FID and CMMD scores. We also find that training on images containing a larger number of objects leads to substantial improvements in spatial consistency, including state-of-the-art results on T2I-CompBench with a spatial score of 0.2133, by fine-tuning on <500 images. Through a set of controlled experiments and ablations, we document additional findings that could support future work that seeks to understand factors that affect spatial consistency in text-to-image models. Project page: https://spright-t2i.github.io/
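The abstract's claim that spatial relationships are under-represented in caption datasets can be probed with a simple keyword audit. The sketch below is illustrative only, not the authors' evaluation pipeline: it counts how many captions contain an explicit spatial phrase, using a small keyword list that is our own assumption.

```python
# Illustrative sketch (NOT the SPRIGHT pipeline): estimate the fraction of
# captions that mention an explicit spatial relationship, as a rough proxy
# for the under-representation discussed in the abstract.
import re

# Assumed keyword list, chosen for illustration.
SPATIAL_TERMS = [
    "left of", "right of", "above", "below", "behind",
    "in front of", "on top of", "under", "beside", "next to",
]

def has_spatial_relation(caption: str) -> bool:
    """Return True if the caption contains any listed spatial phrase."""
    text = caption.lower()
    return any(
        re.search(r"\b" + re.escape(term) + r"\b", text)
        for term in SPATIAL_TERMS
    )

def spatial_fraction(captions: list[str]) -> float:
    """Fraction of captions containing at least one spatial phrase."""
    if not captions:
        return 0.0
    return sum(has_spatial_relation(c) for c in captions) / len(captions)

captions = [
    "a dog to the left of a cat on a sofa",
    "a bowl of fruit",
    "a lamp above a wooden table",
]
```

Here `spatial_fraction(captions)` returns 2/3, since two of the three example captions contain a listed spatial phrase. A keyword match is only a crude lower bound; the paper's 3-fold evaluation pipeline is far more thorough.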

Cite

Text

Chatterjee et al. "Getting It Right: Improving Spatial Consistency in Text-to-Image Models." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72670-5_12

Markdown

[Chatterjee et al. "Getting It Right: Improving Spatial Consistency in Text-to-Image Models." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/chatterjee2024eccv-getting/) doi:10.1007/978-3-031-72670-5_12

BibTeX

@inproceedings{chatterjee2024eccv-getting,
  title     = {{Getting It Right: Improving Spatial Consistency in Text-to-Image Models}},
  author    = {Chatterjee, Agneet and Stan, Gabriela Ben Melech and Aflalo, Estelle Guez and Paul, Sayak and Ghosh, Dhruba and Gokhale, Tejas and Schmidt, Ludwig and Hajishirzi, Hanna and Lal, Vasudev and Baral, Chitta R and Yang, Yezhou},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-72670-5_12},
  url       = {https://mlanthology.org/eccv/2024/chatterjee2024eccv-getting/}
}