V-IRL: Grounding Virtual Intelligence in Real Life
Abstract
There is a sensory gulf between the Earth that humans inhabit and the digital realms in which modern AI agents are created. To develop AI agents that can sense, think, and act as flexibly as humans in real-world settings, it is imperative to bridge the realism gap between the digital and physical worlds. How can we embody agents in an environment as rich and diverse as the one we inhabit, without the constraints imposed by real hardware and control? Towards this end, we introduce V-IRL: a platform that enables agents to scalably interact with the real world in a virtual yet realistic environment. Our platform serves as a playground for developing agents that can accomplish various practical tasks, and as a vast testbed for measuring progress in capabilities spanning perception, decision-making, and interaction with real-world data across the entire globe. All resources will be open-sourced.
Cite
Text
Yang et al. "V-IRL: Grounding Virtual Intelligence in Real Life." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72995-9_3
Markdown
[Yang et al. "V-IRL: Grounding Virtual Intelligence in Real Life." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/yang2024eccv-virl/) doi:10.1007/978-3-031-72995-9_3
BibTeX
@inproceedings{yang2024eccv-virl,
  title     = {{V-IRL: Grounding Virtual Intelligence in Real Life}},
  author    = {Yang, Jihan and Ding, Runyu and Brown, Ellis L and Qi, Xiaojuan and Xie, Saining},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-72995-9_3},
  url       = {https://mlanthology.org/eccv/2024/yang2024eccv-virl/}
}