D3Net: A Unified Speaker-Listener Architecture for 3D Dense Captioning and Visual Grounding
Abstract
Recent work on dense captioning and visual grounding in 3D has achieved impressive results. Despite developments in both areas, the limited amount of available 3D vision-language data causes overfitting issues for 3D visual grounding and 3D dense captioning methods. Moreover, how to discriminatively describe objects in complex 3D environments has not been fully studied. To address these challenges, we present D3Net, an end-to-end neural speaker-listener architecture that can detect, describe, and discriminate. Our D3Net unifies dense captioning and visual grounding in 3D in a self-critical manner. This self-critical property of D3Net encourages the generation of discriminative object captions and enables semi-supervised training on scan data with partially annotated descriptions. Our method outperforms SOTA methods in both tasks on the ScanRefer dataset, surpassing the SOTA 3D dense captioning method by a significant margin.
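To make the speaker-listener idea in the abstract concrete, below is a minimal, hypothetical Python sketch of a self-critical training step: a speaker describes a detected object, a listener tries to ground the caption back to that object, and the listener's success acts as the speaker's reward. The module interfaces (`speaker.sample`, `speaker.greedy`, `listener(...)`), the reward definition, and the REINFORCE-style update are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def self_critical_step(speaker, listener, object_features, target_idx, optimizer):
    """One sketched training step: reward the speaker when the listener can
    ground its caption back to the described object (assumed interfaces)."""
    # Speaker samples a caption for the target object and keeps token log-probs;
    # a greedy caption serves as the self-critical baseline.
    sampled_caption, log_probs = speaker.sample(object_features[target_idx])
    baseline_caption = speaker.greedy(object_features[target_idx])

    with torch.no_grad():
        # Listener scores all detected objects against each caption; the reward
        # is the probability it assigns to the correct (described) object.
        reward = listener(object_features, sampled_caption)[target_idx]
        baseline = listener(object_features, baseline_caption)[target_idx]

    # REINFORCE with the greedy result as baseline: captions that let the
    # listener discriminate the target object are reinforced.
    loss = -(reward - baseline) * log_probs.sum()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), reward.item()
```

Because the reward only depends on whether the listener picks the right object, captions for unannotated objects can also provide a training signal, which is how the semi-supervised setting described in the abstract becomes possible.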
Cite
Text
Chen et al. "D3Net: A Unified Speaker-Listener Architecture for 3D Dense Captioning and Visual Grounding." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19824-3_29
Markdown
[Chen et al. "D3Net: A Unified Speaker-Listener Architecture for 3D Dense Captioning and Visual Grounding." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/chen2022eccv-d3net/) doi:10.1007/978-3-031-19824-3_29
BibTeX
@inproceedings{chen2022eccv-d3net,
title = {{D3Net: A Unified Speaker-Listener Architecture for 3D Dense Captioning and Visual Grounding}},
author = {Chen, Zhenyu and Wu, Qirui and Nießner, Matthias and Chang, Angel X.},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-19824-3_29},
url = {https://mlanthology.org/eccv/2022/chen2022eccv-d3net/}
}