Benchmarking Robustness of Multimodal Image-Text Models Under Distribution Shift

Abstract

Multimodal image-text models have shown remarkable performance in the past few years. However, evaluating their robustness against distribution shifts is crucial before adopting them in real-world applications. In this work, we investigate the robustness of 12 popular open-source image-text models under common perturbations on five tasks (image-text retrieval, visual reasoning, visual entailment, image captioning, and text-to-image generation). In particular, we propose several new multimodal robustness benchmarks by applying 17 image perturbation and 16 text perturbation techniques on top of existing datasets. We observe that multimodal models are not robust to image and text perturbations, and are especially sensitive to image perturbations. Among the tested perturbation methods, character-level perturbations constitute the most severe distribution shift for text, and zoom blur is the most severe shift for image data. We also introduce two new robustness metrics (MMI, the MultiModal Impact score, and MOR, the Missing Object Rate) for proper evaluation of multimodal models. We hope our extensive study sheds light on new directions for the development of robust multimodal models. More details can be found on the project webpage: https://MMRobustness.github.io.
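
To make the benchmarked setting concrete, below is a minimal Python sketch, not the authors' implementation, of a character-level text perturbation paired with a relative-performance-drop score. The names char_swap and relative_drop, the swap-adjacent-letters perturbation, and the example numbers are all hypothetical; the paper's MMI may be defined differently.

import random

def char_swap(text, rate=0.1, seed=0):
    # Character-level perturbation: randomly swap adjacent letters.
    # A simple stand-in for the benchmark's character-level text shifts.
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def relative_drop(clean, perturbed):
    # One plausible MMI-style impact score: the relative drop from
    # clean to perturbed performance (0 = fully robust, 1 = total failure).
    return (clean - perturbed) / clean

print(char_swap("a man riding a horse on the beach", rate=0.3))
print(round(relative_drop(0.72, 0.54), 2))  # hypothetical scores -> 0.25

Under this reading, a larger score at the same perturbation severity indicates a less robust model, which is how the abstract's finding that image perturbations hurt more than text perturbations would surface numerically.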

Cite

Text

Qiu et al. "Benchmarking Robustness of Multimodal Image-Text Models Under Distribution Shift." Data-centric Machine Learning Research, 2024.

Markdown

[Qiu et al. "Benchmarking Robustness of Multimodal Image-Text Models Under Distribution Shift." Data-centric Machine Learning Research, 2024.](https://mlanthology.org/dmlr/2024/qiu2024dmlr-benchmarking/)

BibTeX

@article{qiu2024dmlr-benchmarking,
  title     = {{Benchmarking Robustness of Multimodal Image-Text Models Under Distribution Shift}},
  author    = {Qiu, Jielin and Zhu, Yi and Shi, Xingjian and Wenzel, Florian and Tang, Zhiqiang and Zhao, Ding and Li, Bo and Li, Mu},
  journal   = {Data-centric Machine Learning Research},
  year      = {2024},
  pages     = {1--56},
  volume    = {1},
  url       = {https://mlanthology.org/dmlr/2024/qiu2024dmlr-benchmarking/}
}