MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models

Abstract

Multimodal foundation models (MMFMs) play a crucial role in various applications, including autonomous driving, healthcare, and virtual assistants. However, several studies have revealed vulnerabilities in these models, such as text-to-image models generating unsafe content. Existing benchmarks for multimodal models either predominantly assess helpfulness or focus only on limited perspectives such as fairness and privacy. In this paper, we present the first unified platform, MMDT (Multimodal DecodingTrust), designed to provide a comprehensive safety and trustworthiness evaluation for MMFMs. Our platform assesses models from multiple perspectives, including safety, hallucination, fairness/bias, privacy, adversarial robustness, and out-of-distribution (OOD) generalization. For each perspective, we design various evaluation scenarios and red-teaming algorithms across different tasks to generate challenging data, forming a high-quality benchmark. We evaluate a range of multimodal models using MMDT, and our findings reveal a series of vulnerabilities and areas for improvement across these perspectives. This work introduces the first comprehensive safety and trustworthiness evaluation platform for MMFMs, paving the way for developing safer and more reliable MMFMs and systems. Our platform and benchmark are available at https://mmdecodingtrust.github.io/.

Cite

Text

Xu et al. "MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models." International Conference on Learning Representations, 2025.

Markdown

[Xu et al. "MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/xu2025iclr-mmdt/)

BibTeX

@inproceedings{xu2025iclr-mmdt,
  title     = {{MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models}},
  author    = {Xu, Chejian and Zhang, Jiawei and Chen, Zhaorun and Xie, Chulin and Kang, Mintong and Potter, Yujin and Wang, Zhun and Yuan, Zhuowen and Xiong, Alexander and Xiong, Zidi and Zhang, Chenhui and Yuan, Lingzhi and Zeng, Yi and Xu, Peiyang and Guo, Chengquan and Zhou, Andy and Tan, Jeffrey Ziwei and Zhao, Xuandong and Pinto, Francesco and Xiang, Zhen and Gai, Yu and Lin, Zinan and Hendrycks, Dan and Li, Bo and Song, Dawn},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/xu2025iclr-mmdt/}
}