ScanERU: Interactive 3D Visual Grounding Based on Embodied Reference Understanding

Abstract

3D visual grounding, which aims to link natural language descriptions to specific regions of a 3D scene represented as point clouds, is a fundamental task for human-robot interaction. Recognition errors can significantly impact overall accuracy and in turn degrade the operation of AI systems. Despite their effectiveness, existing methods suffer from low recognition accuracy in cases of multiple adjacent objects with similar appearance. To address this issue, this work introduces human-robot interaction as a cue to facilitate the development of 3D visual grounding. Specifically, a new task termed Embodied Reference Understanding (ERU) is first designed for this purpose. A new dataset called ScanERU is then constructed to evaluate the effectiveness of this idea. Unlike existing datasets, ScanERU is the first to cover semi-synthetic scene integration with textual, real-world visual, and synthetic gestural information. Additionally, this paper formulates a heuristic framework based on attention mechanisms and human body movements to motivate research on ERU. Experimental results demonstrate the superiority of the proposed method, especially in the recognition of multiple identical objects. Our code and dataset are available in the ScanERU repository.

Cite

Text

Lu et al. "ScanERU: Interactive 3D Visual Grounding Based on Embodied Reference Understanding." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I4.28186

Markdown

[Lu et al. "ScanERU: Interactive 3D Visual Grounding Based on Embodied Reference Understanding." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/lu2024aaai-scaneru/) doi:10.1609/AAAI.V38I4.28186

BibTeX

@inproceedings{lu2024aaai-scaneru,
  title     = {{ScanERU: Interactive 3D Visual Grounding Based on Embodied Reference Understanding}},
  author    = {Lu, Ziyang and Pei, Yunqiang and Wang, Guoqing and Li, Peiwei and Yang, Yang and Lei, Yinjie and Shen, Heng Tao},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {3936--3944},
  doi       = {10.1609/AAAI.V38I4.28186},
  url       = {https://mlanthology.org/aaai/2024/lu2024aaai-scaneru/}
}