MME-VideoOCR: Evaluating OCR-Based Capabilities of Multimodal LLMs in Video Scenarios

Abstract

Multimodal Large Language Models (MLLMs) have achieved considerable accuracy in Optical Character Recognition (OCR) from static images. However, their efficacy in video OCR is significantly diminished by factors inherent in video content, such as motion blur, temporal variations, and visual effects. To provide clearer guidance for training practical MLLMs, we introduce the MME-VideoOCR benchmark, which encompasses a comprehensive range of video OCR application scenarios. MME-VideoOCR features 10 task categories comprising 25 individual tasks and spans 44 diverse scenarios. These tasks extend beyond text recognition to deeper comprehension of and reasoning over the textual content within videos. The benchmark consists of 1,464 videos with varying resolutions, aspect ratios, and durations, along with 2,000 meticulously curated, manually annotated question-answer pairs. We evaluate 18 state-of-the-art MLLMs on MME-VideoOCR and find that even the best-performing model (Gemini-2.5 Pro) achieves an accuracy of only 73.7%. Fine-grained analysis indicates that while existing MLLMs perform strongly on tasks where the relevant text is confined to a single frame or a few frames, they show limited capability on tasks that demand holistic video comprehension. These limitations are especially evident in scenarios that require spatio-temporal reasoning, cross-frame information integration, or resistance to language prior bias. Our findings also highlight the importance of high-resolution visual input and sufficient temporal coverage for reliable OCR in dynamic video scenarios.
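
The closing finding about temporal coverage points to a concrete preprocessing concern: the frames fed to an MLLM should span the whole video, not just its opening seconds. Below is a minimal sketch of uniform temporal sampling under that assumption; the function name and parameters are our own illustration, not part of the benchmark's released tooling or evaluation protocol.

```python
# Minimal sketch of uniform temporal frame sampling for video OCR input.
# The names here (uniform_frame_indices, total_frames, num_samples) are
# illustrative assumptions, not the paper's official tooling.

def uniform_frame_indices(total_frames: int, num_samples: int) -> list[int]:
    """Return evenly spaced frame indices so that both short and long
    videos receive full temporal coverage with a fixed frame budget."""
    if total_frames <= num_samples:
        return list(range(total_frames))
    step = total_frames / num_samples
    # Take the midpoint of each of num_samples equal-length segments,
    # avoiding a bias toward the very first or very last frame.
    return [int(step * (i + 0.5)) for i in range(num_samples)]

# Example: a 30 fps, 60 s video (1,800 frames) sampled down to 16 frames.
indices = uniform_frame_indices(total_frames=1800, num_samples=16)
print(indices)  # [56, 168, 281, ..., 1743]
```

Midpoint sampling of equal segments is one simple way to keep temporal coverage uniform at a fixed frame budget; a real harness might instead sample at a fixed frames-per-second rate and cap the total count.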

Cite

Text

Shi et al. "MME-VideoOCR: Evaluating OCR-Based Capabilities of Multimodal LLMs in Video Scenarios." Advances in Neural Information Processing Systems, 2025.

Markdown

[Shi et al. "MME-VideoOCR: Evaluating OCR-Based Capabilities of Multimodal LLMs in Video Scenarios." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/shi2025neurips-mmevideoocr/)

BibTeX

@inproceedings{shi2025neurips-mmevideoocr,
  title     = {{MME-VideoOCR: Evaluating OCR-Based Capabilities of Multimodal LLMs in Video Scenarios}},
  author    = {Shi, Yang and Wang, Huanqian and Xie, Wulin and Zhang, Huanyao and Zhao, Lijie and Zhang, YiFan and Li, Xinfeng and Fu, Chaoyou and Wen, Zhuoer and Liu, Wenting and Zhang, Zhuoran and Chen, Xinlong and Zeng, Bohan and Yang, Sihan and Guan, Yushuo and Zhang, Zhang and Wang, Liang and Li, Haoxuan and Lin, Zhouchen and Zhang, Yuanxing and Wan, Pengfei and Wang, Haotian and Yang, Wenjing},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/shi2025neurips-mmevideoocr/}
}