NS3D: Neuro-Symbolic Grounding of 3D Objects and Relations
Abstract
Grounding object properties and relations in 3D scenes is a prerequisite for a wide range of artificial intelligence tasks, such as visually grounded dialogues and embodied manipulation. However, the variability of the 3D domain induces two fundamental challenges: 1) the expense of labeling and 2) the complexity of 3D grounded language. Hence, essential desiderata for models are to be data-efficient, to generalize to different data distributions and tasks with unseen semantic forms, and to ground complex language semantics (e.g., view-point anchoring and multi-object reference). To address these challenges, we propose NS3D, a neuro-symbolic framework for 3D grounding. NS3D translates language into programs with hierarchical structures by leveraging large language-to-code models. Different functional modules in the programs are implemented as neural networks. Notably, NS3D extends prior neuro-symbolic visual reasoning methods by introducing functional modules that effectively reason about high-arity relations (i.e., relations among more than two objects), key in disambiguating objects in complex 3D scenes. This modular and compositional architecture enables NS3D to achieve state-of-the-art results on the ReferIt3D view-dependence task, a 3D referring expression comprehension benchmark. Importantly, NS3D shows significantly improved performance in data-efficiency and generalization settings, and demonstrates zero-shot transfer to an unseen 3D question-answering task.
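To make the abstract's pipeline concrete, here is a minimal sketch of the neuro-symbolic execution idea: a referring expression is translated into a hierarchical program whose functional modules score scene objects, with a high-arity (here, ternary) relation module disambiguating the referent. All names, the toy scene, and the geometric "between" test are illustrative assumptions, not the paper's actual API; in NS3D the modules are neural networks rather than hand-written rules.

```python
# Hypothetical sketch of neuro-symbolic grounding (illustrative names only;
# NS3D's real modules are learned neural networks, not these rules).

def filter_category(objects, category):
    """Unary module: score 1.0 for objects matching a category, else 0.0."""
    return [1.0 if o["category"] == category else 0.0 for o in objects]

def between_module(objects, scores_a, scores_b):
    """Ternary ("high-arity") module: score each target object against two
    anchor sets. 'Between' is crudely approximated on x-coordinates."""
    out = []
    for i, o in enumerate(objects):
        best = 0.0
        for j, sa in enumerate(scores_a):
            for k, sb in enumerate(scores_b):
                if i in (j, k) or j == k:
                    continue  # target and anchors must be distinct objects
                lo = min(objects[j]["x"], objects[k]["x"])
                hi = max(objects[j]["x"], objects[k]["x"])
                between = 1.0 if lo < o["x"] < hi else 0.0
                best = max(best, sa * sb * between)
        out.append(best)
    return out

# Toy scene for "the chair between the table and the door".
scene = [
    {"category": "table", "x": 0.0},
    {"category": "chair", "x": 1.0},   # the intended referent
    {"category": "door",  "x": 2.0},
    {"category": "chair", "x": 5.0},   # distractor outside the anchors
]

# Hierarchical program: filter(chair) ∧ between(filter(table), filter(door))
chairs = filter_category(scene, "chair")
tables = filter_category(scene, "table")
doors  = filter_category(scene, "door")
scores = [c * b for c, b in zip(chairs, between_module(scene, tables, doors))]
print(scores.index(max(scores)))  # → 1, the chair between the two anchors
```

The ternary module is what a pairwise-relation pipeline cannot express directly: the distractor chair matches the category filter, and only the three-way relation over (target, anchor, anchor) separates the two candidates.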
Cite
Text
Hsu et al. "NS3D: Neuro-Symbolic Grounding of 3D Objects and Relations." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.00257
Markdown
[Hsu et al. "NS3D: Neuro-Symbolic Grounding of 3D Objects and Relations." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/hsu2023cvpr-ns3d/) doi:10.1109/CVPR52729.2023.00257
BibTeX
@inproceedings{hsu2023cvpr-ns3d,
title = {{NS3D: Neuro-Symbolic Grounding of 3D Objects and Relations}},
author = {Hsu, Joy and Mao, Jiayuan and Wu, Jiajun},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2023},
pages = {2614-2623},
doi = {10.1109/CVPR52729.2023.00257},
url = {https://mlanthology.org/cvpr/2023/hsu2023cvpr-ns3d/}
}