R3DS: Reality-Linked 3D Scenes for Panoramic Scene Understanding
Abstract
We introduce the Reality-linked 3D Scenes (R3DS) dataset of synthetic 3D scenes mirroring the real-world scene arrangements from Matterport3D panoramas. Compared to prior work, R3DS has more complete and densely populated scenes, with objects linked to real-world observations in panoramas. R3DS also provides an object support hierarchy and matching object sets (e.g., the same chairs around a dining table) for each scene. Overall, R3DS contains 19K objects represented by 3,784 distinct CAD models from over 100 object categories. We demonstrate the effectiveness of R3DS on the Panoramic Scene Understanding task. We find that: 1) training on R3DS enables better generalization; 2) support relation prediction trained with R3DS improves performance compared to heuristically calculated support; and 3) R3DS offers a challenging benchmark for future work on panoramic scene understanding.
Cite
Text
Wu et al. "R3DS: Reality-Linked 3D Scenes for Panoramic Scene Understanding." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73036-8_26
Markdown
[Wu et al. "R3DS: Reality-Linked 3D Scenes for Panoramic Scene Understanding." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/wu2024eccv-r3ds/) doi:10.1007/978-3-031-73036-8_26
BibTeX
@inproceedings{wu2024eccv-r3ds,
title = {{R3DS: Reality-Linked 3D Scenes for Panoramic Scene Understanding}},
author = {Wu, Qirui and Raychaudhuri, Sonia and Ritchie, Daniel and Savva, Manolis and Chang, Angel X},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-73036-8_26},
url = {https://mlanthology.org/eccv/2024/wu2024eccv-r3ds/}
}