Pioneering Explainable Video Fact-Checking with a New Dataset and Multi-Role Multimodal Model Approach

Abstract

Existing video fact-checking datasets often lack detailed evidence and explanations, compromising the reliability and interpretability of fact-checking methods. To address these gaps, we developed a novel dataset featuring comprehensive annotations for each news item, including veracity labels, the rationales behind these labels, and supporting evidence. This dataset significantly enhances models' ability to accurately identify and explain video content. We also present 3MFact, an explainable automatic framework that employs Multi-role Multimodal Models for video Fact-checking. Our framework iteratively gathers and synthesizes online evidence to progressively determine the veracity label, producing three key outputs: the veracity label, the rationale, and the supporting evidence. We intend this work to be a pioneering effort that provides robust support for the field of video fact-checking.
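
The abstract describes 3MFact as an iterative loop in which model roles alternate between retrieving online evidence and judging the claim until a veracity label is settled, emitting the three outputs above. Below is a minimal Python sketch of such a loop; the role split, the helper names (search_web, judge), and the stopping criterion are illustrative assumptions, not the paper's actual implementation.

from dataclasses import dataclass, field

@dataclass
class Verdict:
    """The three outputs named in the abstract."""
    label: str                                          # veracity label
    rationale: str                                      # why the label was assigned
    evidence: list[str] = field(default_factory=list)   # supporting evidence

def search_web(claim: str, context: list[str]) -> list[str]:
    # Placeholder retriever role: a real system would query search engines
    # and multimodal sources; the snippet returned here is a stand-in.
    return [f"retrieved snippet about: {claim}"]

def judge(claim: str, evidence: list[str]) -> tuple[str, str]:
    # Placeholder analyst role: a real system would prompt a multimodal
    # model with the claim plus the accumulated evidence.
    if evidence:
        return "false", "claim is contradicted by the retrieved evidence"
    return "unverified", "insufficient evidence so far"

def fact_check(video_claim: str, max_rounds: int = 3) -> Verdict:
    """Iteratively gather and synthesize evidence, refining the verdict."""
    evidence: list[str] = []
    label, rationale = "unverified", ""
    for _ in range(max_rounds):
        evidence += search_web(video_claim, context=evidence)
        label, rationale = judge(video_claim, evidence)
        if label != "unverified":        # assumed stopping criterion
            break
    return Verdict(label, rationale, evidence)

if __name__ == "__main__":
    print(fact_check("The flood footage shows Hurricane X in 2024."))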

Cite

Text

Niu et al. "Pioneering Explainable Video Fact-Checking with a New Dataset and Multi-Role Multimodal Model Approach." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/aaai.v39i27.35048

Markdown

[Niu et al. "Pioneering Explainable Video Fact-Checking with a New Dataset and Multi-Role Multimodal Model Approach." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/niu2025aaai-pioneering/) doi:10.1609/aaai.v39i27.35048

BibTeX

@inproceedings{niu2025aaai-pioneering,
  title     = {{Pioneering Explainable Video Fact-Checking with a New Dataset and Multi-Role Multimodal Model Approach}},
  author    = {Niu, Kaipeng and Xu, Danni and Yang, Bingjian and Liu, Wenxuan and Wang, Zheng},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {28276--28283},
  doi       = {10.1609/aaai.v39i27.35048},
  url       = {https://mlanthology.org/aaai/2025/niu2025aaai-pioneering/}
}