OCRBench V2: An Improved Benchmark for Evaluating Large Multimodal Models on Visual Text Localization and Reasoning

Abstract

Evaluating the Optical Character Recognition (OCR) capabilities of Large Multimodal Models (LMMs) has attracted growing interest. Existing benchmarks have highlighted the impressive performance of LMMs in text recognition; however, their abilities in certain challenging tasks, such as text localization, handwritten content extraction, and logical reasoning, remain underexplored. To bridge this gap, we introduce OCRBench v2, a large-scale bilingual text-centric benchmark with currently the most comprehensive set of tasks ($4\times$ more tasks than the previous multi-scene benchmark OCRBench), the widest coverage of scenarios ($31$ diverse scenarios), and thorough evaluation metrics, with $10,000$ human-verified question-answering pairs and a high proportion of difficult samples. Moreover, we construct a private test set with $1,500$ manually annotated images. The consistent evaluation trends observed across both public and private test sets validate the reliability of OCRBench v2. After carefully benchmarking state-of-the-art LMMs, we find that most LMMs score below $50$ (out of $100$) and exhibit five types of limitations: recognition of less frequently encountered text, fine-grained perception, layout perception, complex element parsing, and logical reasoning. The benchmark and evaluation scripts are available at https://github.com/Yuliang-Liu/MultimodalOCR.

Cite

Text

Fu et al. "OCRBench V2: An Improved Benchmark for Evaluating Large Multimodal Models on Visual Text Localization and Reasoning." Advances in Neural Information Processing Systems, 2025.

Markdown

[Fu et al. "OCRBench V2: An Improved Benchmark for Evaluating Large Multimodal Models on Visual Text Localization and Reasoning." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/fu2025neurips-ocrbench/)

BibTeX

@inproceedings{fu2025neurips-ocrbench,
  title     = {{OCRBench V2: An Improved Benchmark for Evaluating Large Multimodal Models on Visual Text Localization and Reasoning}},
  author    = {Fu, Ling and Kuang, Zhebin and Song, Jiajun and Huang, Mingxin and Yang, Biao and Li, Yuzhe and Zhu, Linghao and Luo, Qidi and Wang, Xinyu and Lu, Hao and Li, Zhang and Tang, Guozhi and Shan, Bin and Lin, Chunhui and Liu, Qi and Wu, Binghong and Feng, Hao and Liu, Hao and Huang, Can and Tang, Jingqun and Chen, Wei and Jin, Lianwen and Liu, Yuliang and Bai, Xiang},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/fu2025neurips-ocrbench/}
}