RoboSpatial: Teaching Spatial Understanding to 2D and 3D Vision-Language Models for Robotics
Abstract
Spatial understanding is essential for robots to perceive, reason about, and interact with their environments. However, current vision-language models often rely on general-purpose image datasets that lack robust spatial scene understanding and reference frame comprehension (ego-, world-, or object-centric). To address this gap, we introduce RoboSpatial, a large-scale dataset of real indoor and tabletop environments captured via egocentric images and 3D scans. RoboSpatial provides 1M images, 5k 3D scans, and 3M annotated spatial relationships, enabling both 2D and 3D spatial reasoning. Models trained on RoboSpatial outperform baselines on tasks including spatial affordance prediction, spatial relationship prediction, and robot manipulation.
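The abstract describes spatial-relationship annotations that are grounded in an explicit reference frame (ego-, world-, or object-centric) and paired with 2D images and 3D scans. As a rough illustration only, the sketch below shows one hypothetical way such an annotation could be represented and turned into a VQA-style training prompt; all names (`SpatialRelationAnnotation`, `image_id`, `reference_frame`, etc.) are assumptions for illustration, not the dataset's actual schema.

```python
# Hypothetical sketch of a RoboSpatial-style annotation record.
# Field names and values are illustrative assumptions, not the real schema.
from dataclasses import dataclass
from typing import Literal, Optional


@dataclass
class SpatialRelationAnnotation:
    image_id: str                                        # egocentric RGB frame
    scan_id: Optional[str]                               # optional linked 3D scan
    question: str                                        # spatial query posed to the model
    answer: str                                          # ground-truth answer
    reference_frame: Literal["ego", "world", "object"]   # frame the relation is expressed in
    task: Literal["relationship", "affordance"]          # spatial reasoning task type


def to_vqa_prompt(ann: SpatialRelationAnnotation) -> str:
    """Format one annotation as a VQA-style prompt/answer pair."""
    return (
        f"[{ann.reference_frame}-centric] {ann.question}\n"
        f"Answer: {ann.answer}"
    )


if __name__ == "__main__":
    example = SpatialRelationAnnotation(
        image_id="tabletop_00042.jpg",
        scan_id="scene_0007",
        question="Is the mug to the left of the plate from the camera's viewpoint?",
        answer="yes",
        reference_frame="ego",
        task="relationship",
    )
    print(to_vqa_prompt(example))
```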
Cite
Text
Song et al. "RoboSpatial: Teaching Spatial Understanding to 2D and 3D Vision-Language Models for Robotics." ICLR 2025 Workshops: WRL, 2025.
Markdown
[Song et al. "RoboSpatial: Teaching Spatial Understanding to 2D and 3D Vision-Language Models for Robotics." ICLR 2025 Workshops: WRL, 2025.](https://mlanthology.org/iclrw/2025/song2025iclrw-robospatial/)
BibTeX
@inproceedings{song2025iclrw-robospatial,
title = {{RoboSpatial: Teaching Spatial Understanding to 2D and 3D Vision-Language Models for Robotics}},
author = {Song, Chan Hee and Blukis, Valts and Tremblay, Jonathan and Tyree, Stephen and Su, Yu and Birchfield, Stan},
booktitle = {ICLR 2025 Workshops: WRL},
year = {2025},
url = {https://mlanthology.org/iclrw/2025/song2025iclrw-robospatial/}
}