Perceiving and Acting in First-Person: A Dataset and Benchmark for Egocentric Human-Object-Human Interactions
Abstract
Learning action models from real-world human-centric interaction datasets is important for efficiently building general-purpose intelligent assistants. However, most existing datasets cover only specialized interaction categories and ignore that AI assistants perceive and act from a first-person perspective. We argue that both generalist interaction knowledge and the egocentric modality are indispensable. In this paper, we embed the manual-assistance task into a vision-language-action framework, where the assistant provides services to the instructor guided by egocentric vision and verbal commands. With our hybrid RGB-MoCap system, pairs of assistants and instructors interact with multiple objects and the scene following GPT-generated scripts. Under this setting, we present InterVLA, the first large-scale human-object-human interaction dataset, comprising 11.4 hours and 1.2M frames of multimodal data that span 2 egocentric and 5 exocentric videos, accurate human/object motions, and verbal commands. Furthermore, we establish novel benchmarks on egocentric human motion estimation, interaction synthesis, and interaction prediction, with comprehensive analysis. We believe that our InterVLA testbed and benchmarks will foster future work on building AI agents in the physical world.
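To make the dataset composition described above concrete, the sketch below shows one possible way a synchronized InterVLA sample could be represented in code. This is purely illustrative: the class name, field names, and array shapes are assumptions, not the paper's actual data format or API.

```python
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class InterVLASample:
    """Hypothetical layout of one synchronized clip (names/shapes are assumed)."""
    ego_videos: List[np.ndarray]     # 2 egocentric RGB streams, each (T, H, W, 3)
    exo_videos: List[np.ndarray]     # 5 exocentric RGB streams, each (T, H, W, 3)
    assistant_motion: np.ndarray     # (T, J, 3) assistant joint positions from MoCap
    instructor_motion: np.ndarray    # (T, J, 3) instructor joint positions from MoCap
    object_poses: np.ndarray         # (T, O, 7) per-object translation + quaternion
    command: str                     # instructor's transcribed verbal command


# Minimal usage example with dummy data (values are placeholders).
T, J, O = 30, 24, 3
sample = InterVLASample(
    ego_videos=[np.zeros((T, 480, 640, 3), dtype=np.uint8) for _ in range(2)],
    exo_videos=[np.zeros((T, 480, 640, 3), dtype=np.uint8) for _ in range(5)],
    assistant_motion=np.zeros((T, J, 3)),
    instructor_motion=np.zeros((T, J, 3)),
    object_poses=np.zeros((T, O, 7)),
    command="Please hand me the cup on the table.",
)
```

Such a per-clip structure would cover all modalities the abstract lists (egocentric/exocentric video, human and object motion, and verbal commands), but the released data may well be organized differently.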
Cite
Text
Xu et al. "Perceiving and Acting in First-Person: A Dataset and Benchmark for Egocentric Human-Object-Human Interactions." International Conference on Computer Vision, 2025.

Markdown
[Xu et al. "Perceiving and Acting in First-Person: A Dataset and Benchmark for Egocentric Human-Object-Human Interactions." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/xu2025iccv-perceiving/)

BibTeX
@inproceedings{xu2025iccv-perceiving,
title = {{Perceiving and Acting in First-Person: A Dataset and Benchmark for Egocentric Human-Object-Human Interactions}},
author = {Xu, Liang and Yang, Chengqun and Lin, Zili and Xu, Fei and Liu, Yifan and Xu, Congsheng and Zhang, Yiyi and Qin, Jie and Sheng, Xingdong and Liu, Yunhui and Jin, Xin and Yan, Yichao and Zeng, Wenjun and Yang, Xiaokang},
booktitle = {International Conference on Computer Vision},
year = {2025},
pages = {12535-12548},
url = {https://mlanthology.org/iccv/2025/xu2025iccv-perceiving/}
}