RelationField: Relate Anything in Radiance Fields
Abstract
Neural radiance fields are an emerging 3D scene representation and have recently even been extended to learn features for scene understanding by distilling open-vocabulary features from vision-language models. However, current methods primarily focus on object-centric representations, supporting object segmentation or detection, while understanding semantic relationships between objects remains largely unexplored. To address this gap, we propose RelationField, the first method to extract inter-object relationships directly from neural radiance fields. RelationField represents relationships between objects as pairs of rays within a neural radiance field, effectively extending its formulation to include implicit relationship queries. To teach RelationField complex, open-vocabulary relationships, relationship knowledge is distilled from multi-modal LLMs. To evaluate RelationField, we solve open-vocabulary 3D scene graph generation and relationship-guided instance segmentation tasks, achieving state-of-the-art performance in both. See the project website at relationfield.github.io.
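To make the pair-of-rays formulation concrete, below is a minimal PyTorch sketch of what a pairwise relationship query could look like: features for two rays from the underlying radiance field are concatenated in order and mapped to a relationship embedding, which can then be scored against open-vocabulary text prompts. The module name `RelationFieldHead`, the layer sizes, and the CLIP-style text embeddings are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class RelationFieldHead(nn.Module):
    """Hypothetical sketch of a pairwise relationship query head.

    Assumes each of the two query rays has already been encoded into a
    feature vector by the underlying radiance field. This module maps the
    ordered (subject, object) pair to a relationship embedding that can be
    compared against open-vocabulary text embeddings. All names and
    dimensions here are assumptions for illustration.
    """

    def __init__(self, feat_dim: int = 256, rel_dim: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, 512),
            nn.ReLU(),
            nn.Linear(512, rel_dim),
        )

    def forward(self, subj_feat: torch.Tensor, obj_feat: torch.Tensor) -> torch.Tensor:
        # Order matters: (subject, object) pairs can encode directed
        # relations such as "standing on" vs. "supporting".
        pair = torch.cat([subj_feat, obj_feat], dim=-1)
        return self.mlp(pair)


# Example query: score one ray pair against open-vocabulary relationship
# prompts via cosine similarity (text embeddings assumed precomputed).
head = RelationFieldHead()
subj_feat = torch.randn(1, 256)  # feature of the ray hitting the subject
obj_feat = torch.randn(1, 256)   # feature of the ray hitting the object
text_emb = torch.randn(3, 512)   # e.g. "on top of", "next to", "hanging from"

rel_emb = head(subj_feat, obj_feat)
scores = torch.cosine_similarity(rel_emb, text_emb, dim=-1)
print(scores)  # highest score = best-matching relationship prompt
```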
Cite
Text
Koch et al. "RelationField: Relate Anything in Radiance Fields." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.02022
Markdown
[Koch et al. "RelationField: Relate Anything in Radiance Fields." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/koch2025cvpr-relationfield/) doi:10.1109/CVPR52734.2025.02022
BibTeX
@inproceedings{koch2025cvpr-relationfield,
title = {{RelationField: Relate Anything in Radiance Fields}},
author = {Koch, Sebastian and Wald, Johanna and Colosi, Mirco and Vaskevicius, Narunas and Hermosilla, Pedro and Tombari, Federico and Ropinski, Timo},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2025},
pages = {21706-21716},
doi = {10.1109/CVPR52734.2025.02022},
url = {https://mlanthology.org/cvpr/2025/koch2025cvpr-relationfield/}
}