ScanRefer: 3D Object Localization in RGB-D Scans Using Natural Language

Abstract

We introduce the new task of 3D object localization in RGB-D scans using natural language descriptions. As input, we assume a point cloud of a scanned 3D scene along with a free-form description of a specified target object. To address this task, we propose ScanRefer, where the core idea is to learn a fused descriptor from 3D object proposals and encoded sentence embeddings. This learned descriptor correlates the language expressions with the underlying geometric features of the 3D scan and facilitates the regression of the 3D bounding box of the target object. To train and benchmark our method, we introduce the new ScanRefer dataset, containing 46,173 descriptions of 9,943 objects from 703 ScanNet scenes. ScanRefer is the first large-scale effort to perform object localization via natural language expressions directly in 3D.
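The core idea described above — tiling a sentence embedding across per-proposal geometric features, fusing the two, and scoring each proposal to localize the target box — can be sketched in a minimal form. This is an illustrative toy sketch with randomly initialized weights, not the authors' implementation; the function and parameter names (`fuse_and_localize`, `w_fuse`, `w_score`) and the tiny dimensions are assumptions for demonstration only.

```python
import numpy as np

def fuse_and_localize(proposal_feats, proposal_boxes, sent_emb, w_fuse, w_score):
    """Score each 3D object proposal against a sentence embedding.

    proposal_feats: (M, D_geo) geometric features, one row per proposal.
    proposal_boxes: (M, 6) axis-aligned boxes (center xyz + size xyz).
    sent_emb:       (D_lang,) encoded description (e.g. from a GRU).
    w_fuse, w_score: toy MLP weights standing in for the learned fusion head.
    Returns the box of the highest-scoring proposal and all proposal scores.
    """
    m = proposal_feats.shape[0]
    # Tile the sentence embedding so every proposal sees the same language cue,
    # then concatenate language and geometry into one fused descriptor.
    tiled = np.tile(sent_emb, (m, 1))                      # (M, D_lang)
    fused = np.concatenate([proposal_feats, tiled], axis=1)  # (M, D_geo + D_lang)
    hidden = np.maximum(fused @ w_fuse, 0.0)               # ReLU fusion layer
    scores = (hidden @ w_score).ravel()                    # one score per proposal
    best = int(np.argmax(scores))
    return proposal_boxes[best], scores

# Toy example: 4 proposals, 8-dim geometric features, 6-dim sentence embedding.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8))
boxes = rng.standard_normal((4, 6))
sent = rng.standard_normal(6)
w1 = rng.standard_normal((14, 16)) * 0.1   # (D_geo + D_lang) -> hidden
w2 = rng.standard_normal((16, 1)) * 0.1    # hidden -> score
box, scores = fuse_and_localize(feats, boxes, sent, w1, w2)
```

In the actual method the proposals come from a 3D detection backbone and the weights are trained end-to-end, so the score reflects how well each proposal's geometry matches the description; here the argmax over scores simply selects one proposal's box as the localization output.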

Cite

Text

Chen et al. "ScanRefer: 3D Object Localization in RGB-D Scans Using Natural Language." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58565-5_13

Markdown

[Chen et al. "ScanRefer: 3D Object Localization in RGB-D Scans Using Natural Language." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/chen2020eccv-scanrefer/) doi:10.1007/978-3-030-58565-5_13

BibTeX

@inproceedings{chen2020eccv-scanrefer,
  title     = {{ScanRefer: 3D Object Localization in RGB-D Scans Using Natural Language}},
  author    = {Chen, Dave Zhenyu and Chang, Angel X. and Nießner, Matthias},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58565-5_13},
  url       = {https://mlanthology.org/eccv/2020/chen2020eccv-scanrefer/}
}