Debiasing Multimodal Models via Causal Information Minimization

Abstract

Most existing debiasing methods for multimodal models, including causal intervention and inference methods, rely on approximate heuristics to represent the biases, such as shallow features from early stages of training or unimodal features for multimodal tasks like VQA, and these heuristics may not represent the biases accurately. In this paper, we study bias arising from confounders in a causal graph for multimodal data, and examine a novel approach that leverages causally-motivated information minimization to learn the confounder representations. Robust predictive features contain diverse information that helps a model generalize to out-of-distribution data, whereas biased shortcut features tend to be comparatively simple. Hence, minimizing the information content of features obtained from a pretrained biased model helps learn the simplest predictive features that capture the underlying data distribution. We treat these features as confounder representations and use them via methods motivated by causal theory to remove bias from models. We find that the learned confounder representations indeed capture dataset biases, and the proposed debiasing methods improve out-of-distribution (OOD) performance on multiple multimodal datasets without sacrificing in-distribution performance.
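As a concrete illustration of the recipe sketched in the abstract, below is a minimal PyTorch sketch of one way causal information minimization could be realized: a small stochastic encoder is trained on frozen features from a pretrained biased model with a variational information-bottleneck-style objective (cross-entropy plus a KL penalty that minimizes the code's information content), and the resulting confounder branch is then used to adjust the main model's predictions. All names (`ConfounderEncoder`, `info_min_loss`, `debiased_logits`), dimensions, and loss weights here are hypothetical illustrations; the paper's actual losses, architecture, and causal adjustment may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sizes; not taken from the paper.
FEAT_DIM, Z_DIM, NUM_CLASSES = 512, 16, 10

class ConfounderEncoder(nn.Module):
    """Maps frozen features from a pretrained biased model to a low-capacity
    stochastic code z. Penalizing KL(q(z|f) || N(0, I)) limits how much
    information z retains, pushing it toward the simplest still-predictive
    features, which are treated as the confounder representation."""
    def __init__(self):
        super().__init__()
        self.mu = nn.Linear(FEAT_DIM, Z_DIM)
        self.logvar = nn.Linear(FEAT_DIM, Z_DIM)
        self.classifier = nn.Linear(Z_DIM, NUM_CLASSES)

    def forward(self, feats):
        mu, logvar = self.mu(feats), self.logvar(feats)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.classifier(z), mu, logvar

def info_min_loss(logits, labels, mu, logvar, beta=1e-2):
    """Cross-entropy keeps z predictive; the KL term minimizes its information content."""
    ce = F.cross_entropy(logits, labels)
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()
    return ce + beta * kl

def debiased_logits(main_logits, conf_logits, alpha=1.0):
    """One causally-motivated use of the confounder branch at inference:
    subtract its logits from the main model's, in the spirit of
    counterfactual-inference-style debiasing (an illustrative choice)."""
    return main_logits - alpha * conf_logits

# Toy usage with random stand-ins for frozen biased-model features.
enc = ConfounderEncoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
feats = torch.randn(32, FEAT_DIM)
labels = torch.randint(0, NUM_CLASSES, (32,))
logits, mu, logvar = enc(feats)
loss = info_min_loss(logits, labels, mu, logvar)
loss.backward()
opt.step()
```

In this sketch, the KL weight `beta` trades predictiveness against simplicity: larger values squeeze more information out of z, so the code collapses onto the dataset's easiest shortcuts, which is exactly what a confounder representation is meant to capture.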

Cite

Text

Patil et al. "Debiasing Multimodal Models via Causal Information Minimization." NeurIPS 2023 Workshops: CRL, 2023.

Markdown

[Patil et al. "Debiasing Multimodal Models via Causal Information Minimization." NeurIPS 2023 Workshops: CRL, 2023.](https://mlanthology.org/neuripsw/2023/patil2023neuripsw-debiasing/)

BibTeX

@inproceedings{patil2023neuripsw-debiasing,
  title     = {{Debiasing Multimodal Models via Causal Information Minimization}},
  author    = {Patil, Vaidehi and Maharana, Adyasha and Bansal, Mohit},
  booktitle = {NeurIPS 2023 Workshops: CRL},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/patil2023neuripsw-debiasing/}
}