EOC-Bench: Can MLLMs Identify, Recall, and Forecast Objects in an Egocentric World?

Abstract

The emergence of multimodal large language models (MLLMs) has driven breakthroughs in egocentric vision applications. These applications necessitate persistent, context-aware understanding of objects, as users interact with tools in dynamic and cluttered environments. However, existing embodied benchmarks primarily focus on static scene exploration, emphasizing objects' appearance and spatial attributes while neglecting the assessment of dynamic changes arising from users' interactions, and thus fall short of evaluating the object-level spatiotemporal reasoning capabilities required for real-world interactions. To address this gap, we introduce EOC-Bench, an innovative benchmark designed to systematically evaluate object-centric embodied cognition in dynamic egocentric scenarios. Specifically, EOC-Bench features 3,277 meticulously annotated QA pairs categorized into three temporal categories: Past, Present, and Future, covering 11 fine-grained evaluation dimensions and 3 visual object referencing types. To ensure thorough assessment, we develop a mixed-format human-in-the-loop annotation framework. Based on EOC-Bench, we conduct comprehensive evaluations of various proprietary, open-source, and object-level MLLMs. EOC-Bench serves as a crucial tool for advancing the embodied object cognitive capabilities of MLLMs, establishing a robust foundation for developing reliable core models for embodied systems.

Cite

Text

Yuan et al. "EOC-Bench: Can MLLMs Identify, Recall, and Forecast Objects in an Egocentric World?." Advances in Neural Information Processing Systems, 2025.

Markdown

[Yuan et al. "EOC-Bench: Can MLLMs Identify, Recall, and Forecast Objects in an Egocentric World?." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/yuan2025neurips-eocbench/)

BibTeX

@inproceedings{yuan2025neurips-eocbench,
  title     = {{EOC-Bench: Can MLLMs Identify, Recall, and Forecast Objects in an Egocentric World?}},
  author    = {Yuan, Yuqian and Dang, Ronghao and Li, Long and Li, Wentong and Jiao, Dian and Li, Xin and Zhao, Deli and Wang, Fan and Zhang, Wenqiao and Xiao, Jun and Zhuang, Yueting},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/yuan2025neurips-eocbench/}
}