Debiasing Large Vision-Language Models by Ablating Protected Attribute Representations
Abstract
Large Vision-Language Models (LVLMs) such as LLaVA have demonstrated impressive capabilities as general-purpose chatbots that can engage in conversations about a provided input image. However, their responses are influenced by societal biases present in their training datasets, leading to undesirable differences in how the model responds when presented with images depicting people of different demographics. In this work, we propose a novel debiasing framework for LVLMs by directly ablating biased attributes during text generation, so that the model avoids generating text related to protected attributes, or even representing them internally. Our method requires no training and only a relatively small number of representative biased outputs ($\sim$1000 samples). Our experiments show that not only can we minimize the propensity of LVLMs to generate text related to protected attributes, but we can even use synthetic data to inform the ablation while retaining captioning performance on real data such as COCO. Furthermore, we find that generations from a debiased LVLM exhibit similar accuracy to those of a baseline biased model, showing that debiasing can be achieved without sacrificing model performance.
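The abstract describes the method only at a high level. A common way to implement this kind of attribute ablation is directional ablation: estimate a "protected attribute" direction in the model's hidden-state space from a small set of biased versus neutral activations, then project that direction out of the hidden states during generation. The sketch below is an illustrative assumption, not the paper's released code; the difference-of-means estimator and the function names are hypothetical.

```python
import numpy as np

def attribute_direction(biased_acts: np.ndarray, neutral_acts: np.ndarray) -> np.ndarray:
    """Estimate a unit-norm protected-attribute direction.

    Uses a simple difference-of-means between activations collected from
    biased and neutral generations (shape: [num_samples, hidden_dim]).
    """
    d = biased_acts.mean(axis=0) - neutral_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(hidden: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of each hidden state along `direction`.

    `hidden` has shape [num_tokens, hidden_dim]; the returned states are
    orthogonal to the protected-attribute direction, so the model can no
    longer represent that attribute along it.
    """
    coeffs = hidden @ direction            # projection coefficient per token
    return hidden - np.outer(coeffs, direction)
```

In a real LVLM this projection would be applied to the residual-stream activations at one or more layers via forward hooks during decoding; the NumPy version above only demonstrates the linear-algebra step.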
Cite
Ratzlaff et al. "Debiasing Large Vision-Language Models by Ablating Protected Attribute Representations." NeurIPS 2024 Workshops: SafeGenAI, 2024.
BibTeX
@inproceedings{ratzlaff2024neuripsw-debiasing,
title = {{Debiasing Large Vision-Language Models by Ablating Protected Attribute Representations}},
author = {Ratzlaff, Neale and Olson, Matthew Lyle and Hinck, Musashi and Tseng, Shao-Yen and Lal, Vasudev and Howard, Phillip},
booktitle = {NeurIPS 2024 Workshops: SafeGenAi},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/ratzlaff2024neuripsw-debiasing/}
}