Fair Image Generation from Pre-Trained Models by Probabilistic Modeling

Abstract

The production of high-fidelity images by generative models has been transformative for the field of artificial intelligence. Yet, while the generated images are of high quality, they tend to mirror biases present in the datasets the models are trained on. While there has been an influx of work tackling fair ML broadly, existing work on fair image generation typically relies on modifying the model architecture or fine-tuning an existing generative model, both of which require costly retraining. In this paper, we use a family of tractable probabilistic models called probabilistic circuits (PCs), which can be attached to a pre-trained generative model to produce fair images without retraining. We show that, for a given pre-trained generative model, our method only requires a small fair reference dataset to train the PC, removing the need to collect a large (fair) dataset to retrain the generative model. Our experimental results show that the proposed method strikes a balance between training cost and the fairness and quality of the generated images.

Cite

Text

Ahmadi et al. "Fair Image Generation from Pre-Trained Models by Probabilistic Modeling." NeurIPS 2024 Workshops: SafeGenAi, 2024.

Markdown

[Ahmadi et al. "Fair Image Generation from Pre-Trained Models by Probabilistic Modeling." NeurIPS 2024 Workshops: SafeGenAi, 2024.](https://mlanthology.org/neuripsw/2024/ahmadi2024neuripsw-fair/)

BibTeX

@inproceedings{ahmadi2024neuripsw-fair,
  title     = {{Fair Image Generation from Pre-Trained Models by Probabilistic Modeling}},
  author    = {Ahmadi, Mahdi and Leland, John and Chatterjee, Agneet and Choi, YooJung},
  booktitle = {NeurIPS 2024 Workshops: SafeGenAi},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/ahmadi2024neuripsw-fair/}
}