Dysca: A Dynamic and Scalable Benchmark for Evaluating Perception Ability of LVLMs

Abstract

Many benchmarks have been proposed to evaluate the perception ability of Large Vision-Language Models (LVLMs). However, most benchmarks construct their questions by selecting images from existing datasets, which introduces a risk of data leakage. Moreover, these benchmarks evaluate LVLMs only on realistic-style images and clean scenarios, leaving multi-stylized images and noisy scenarios unexplored. In response to these challenges, we propose Dysca, a dynamic and scalable benchmark for evaluating LVLMs with synthesized images. Specifically, we leverage Stable Diffusion together with a rule-based method to dynamically generate novel images, questions, and the corresponding answers. We consider 51 image styles and evaluate perception capability across 20 subtasks. Moreover, we conduct evaluations under 4 scenarios (i.e., clean, corruption, print attacking, and adversarial attacking) and 3 question types (i.e., multiple-choice, true-or-false, and free-form). Thanks to this generative paradigm, Dysca serves as a scalable benchmark to which new subtasks and scenarios can be easily added. A total of 24 advanced open-source LVLMs and 2 closed-source LVLMs are evaluated on Dysca, revealing the drawbacks of current LVLMs. The benchmark is released at https://github.com/Benchmark-Dysca/Dysca.
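The rule-based generation idea described in the abstract can be illustrated with a minimal sketch: sample an attribute combination, turn it into a text-to-image prompt, and derive the question, answer choices, and ground truth from the same attributes. The attribute lists, field names, and template below are hypothetical placeholders, not Dysca's actual rules (the paper uses 51 styles and 20 subtasks):

```python
import random

# Hypothetical attribute pools; Dysca's real benchmark uses far larger sets.
STYLES = ["oil painting", "pixel art", "watercolor"]
SUBJECTS = ["cat", "bicycle", "lighthouse"]
COLORS = ["red", "blue", "green"]

def generate_item(rng: random.Random) -> dict:
    """Sample attributes, build an image-generation prompt, and derive a
    multiple-choice question whose answer is known by construction."""
    style = rng.choice(STYLES)
    subject = rng.choice(SUBJECTS)
    color = rng.choice(COLORS)

    # The prompt would be passed to a text-to-image model (e.g. Stable Diffusion).
    sd_prompt = f"a {color} {subject}, {style} style"

    # Because the image is synthesized from known attributes, the ground
    # truth is available without human annotation.
    question = f"What is the color of the {subject} in the image?"
    distractors = [c for c in COLORS if c != color]
    choices = sorted([color] + distractors)

    return {
        "sd_prompt": sd_prompt,
        "question": question,
        "choices": choices,
        "answer": color,
    }

item = generate_item(random.Random(0))
print(item["sd_prompt"], "->", item["answer"])
```

Since every item is freshly sampled at evaluation time, the same mechanism also explains why such a benchmark resists data leakage: the exact images never exist in any pre-training corpus.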

Cite

Text

Zhang et al. "Dysca: A Dynamic and Scalable Benchmark for Evaluating Perception Ability of LVLMs." International Conference on Learning Representations, 2025.

Markdown

[Zhang et al. "Dysca: A Dynamic and Scalable Benchmark for Evaluating Perception Ability of LVLMs." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/zhang2025iclr-dysca/)

BibTeX

@inproceedings{zhang2025iclr-dysca,
  title     = {{Dysca: A Dynamic and Scalable Benchmark for Evaluating Perception Ability of LVLMs}},
  author    = {Zhang, Jie and Wang, Zhongqi and Lei, Mengqi and Yuan, Zheng and Yan, Bei and Shan, Shiguang and Chen, Xilin},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/zhang2025iclr-dysca/}
}