Word-Level Explanations for Analyzing Bias in Text-to-Image Models

Abstract

Text-to-image (T2I) models take a sentence (i.e., a prompt) and generate images associated with this input prompt. These models have created award-winning art, videos, and even synthetic datasets. However, T2I models can also generate images that underrepresent minorities based on race and sex. This paper investigates which words in the input prompt are responsible for bias in generated images. We introduce a method for computing a score for each word in the prompt; each score quantifies that word’s influence on biases in the model’s output. Our method follows the principle of explaining by removing, leveraging masked language models to calculate the influence scores. We perform experiments on Stable Diffusion to demonstrate that our method identifies the replication of societal stereotypes in generated images.
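The sketch below illustrates the general idea described in the abstract, not the authors' exact procedure: each prompt word is masked, a masked language model (BERT via the Hugging Face `fill-mask` pipeline) proposes a replacement, images are regenerated with Stable Diffusion, and the word's influence score is the shift in a bias statistic relative to the original prompt. The `bias_statistic` function is a hypothetical placeholder for whatever demographic attribute classifier one chooses to use.

```python
# Minimal sketch of word-level bias-influence scoring in the spirit of
# "explaining by removing". Assumptions: BERT as the masked language model,
# Stable Diffusion v1.5 via diffusers, and a user-supplied attribute
# classifier behind bias_statistic (placeholder below).

import torch
from transformers import pipeline
from diffusers import StableDiffusionPipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
sd = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")


def bias_statistic(images):
    """Hypothetical placeholder: e.g., the fraction of generated images
    classified as one demographic group by a face-attribute classifier."""
    raise NotImplementedError("plug in an attribute classifier of your choice")


def generate(prompt, n=8):
    # Generate n images for the given prompt.
    return sd(prompt, num_images_per_prompt=n).images


def word_influence_scores(prompt, n_images=8):
    words = prompt.split()
    baseline = bias_statistic(generate(prompt, n_images))
    scores = {}
    for i, word in enumerate(words):
        # Mask word i and let the MLM propose a contextual substitute.
        masked = " ".join(words[:i] + [fill_mask.tokenizer.mask_token] + words[i + 1:])
        substitute = fill_mask(masked, top_k=1)[0]["token_str"]
        edited_prompt = " ".join(words[:i] + [substitute] + words[i + 1:])
        # Regenerate images and measure how the bias statistic shifts.
        edited = bias_statistic(generate(edited_prompt, n_images))
        # A larger shift suggests this word has more influence on the bias.
        scores[word] = abs(baseline - edited)
    return scores
```

For example, `word_influence_scores("a photo of a successful engineer")` would assign each word a score reflecting how much the demographic composition of the generated images changes when that word is swapped for an MLM-proposed alternative.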

Cite

Text

Lin et al. "Word-Level Explanations for Analyzing Bias in Text-to-Image Models." ICML 2023 Workshops: DeployableGenerativeAI, 2023.

Markdown

[Lin et al. "Word-Level Explanations for Analyzing Bias in Text-to-Image Models." ICML 2023 Workshops: DeployableGenerativeAI, 2023.](https://mlanthology.org/icmlw/2023/lin2023icmlw-wordlevel/)

BibTeX

@inproceedings{lin2023icmlw-wordlevel,
  title     = {{Word-Level Explanations for Analyzing Bias in Text-to-Image Models}},
  author    = {Lin, Alexander and Paes, Lucas Monteiro and Tanneru, Sree Harsha and Srinivas, Suraj and Lakkaraju, Himabindu},
  booktitle = {ICML 2023 Workshops: DeployableGenerativeAI},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/lin2023icmlw-wordlevel/}
}