MMFakeBench: A Mixed-Source Multimodal Misinformation Detection Benchmark for LVLMs
Abstract
Current multimodal misinformation detection (MMD) methods often assume a single source and type of forgery for each sample, which is insufficient for real-world scenarios where multiple forgery sources coexist. The lack of a benchmark for mixed-source misinformation has hindered progress in this field. To address this, we introduce MMFakeBench, the first comprehensive benchmark for mixed-source MMD. MMFakeBench includes 3 critical sources: textual veracity distortion, visual veracity distortion, and cross-modal consistency distortion, along with 12 sub-categories of misinformation forgery types. We further conduct an extensive evaluation of 6 prevalent detection methods and 15 Large Vision-Language Models (LVLMs) on MMFakeBench under a zero-shot setting. The results indicate that current methods struggle under this challenging and realistic mixed-source MMD setting. Additionally, we propose MMD-Agent, a novel approach to integrate the reasoning, action, and tool-use capabilities of LVLM agents, significantly enhancing accuracy and generalization. We believe this study will catalyze future research into more realistic mixed-source multimodal misinformation and provide a fair evaluation of misinformation detection methods.
Cite
Text
Liu et al. "MMFakeBench: A Mixed-Source Multimodal Misinformation Detection Benchmark for LVLMs." International Conference on Learning Representations, 2025.
Markdown
[Liu et al. "MMFakeBench: A Mixed-Source Multimodal Misinformation Detection Benchmark for LVLMs." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/liu2025iclr-mmfakebench/)
BibTeX
@inproceedings{liu2025iclr-mmfakebench,
title = {{MMFakeBench: A Mixed-Source Multimodal Misinformation Detection Benchmark for LVLMs}},
author = {Liu, Xuannan and Li, Zekun and Li, Pei Pei and Huang, Huaibo and Xia, Shuhan and Cui, Xing and Huang, Linzhi and Deng, Weihong and He, Zhaofeng},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/liu2025iclr-mmfakebench/}
}