Grounding 3D Object Affordance from 2D Interactions in Images

Abstract

Grounding 3D object affordance seeks to locate the "action possibility" regions of objects in 3D space, serving as a link between perception and operation for embodied agents. Existing studies primarily focus on connecting visual affordances with geometric structures, e.g., relying on annotations to declare interactive regions of interest on the object and establishing a mapping between those regions and affordances. However, the essence of learning object affordance is to understand how to use the object, and a paradigm detached from actual interactions generalizes poorly. Humans, by contrast, can perceive object affordances in the physical world from demonstration images or videos. Motivated by this, we introduce a novel task setting: grounding 3D object affordance from 2D interactions in images, which poses the challenge of anticipating affordance from interactions captured in different sources. To address this problem, we devise an Interaction-driven 3D Affordance Grounding Network (IAG), which aligns the region features of objects from different sources and models the interactive contexts for 3D object affordance grounding. In addition, we collect a Point-Image Affordance Dataset (PIAD) to support the proposed task. Comprehensive experiments on PIAD demonstrate the reliability of the proposed task and the superiority of our method. The project is available at https://github.com/yyvhang/IAGNet.
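
To make the setup concrete, here is a minimal sketch of the kind of cross-source fusion the abstract describes: 2D image region features conditioning per-point 3D affordance prediction via cross-attention. All module names, dimensions, and the architecture itself are illustrative assumptions, not the authors' actual IAG implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

class CrossSourceAffordanceHead(nn.Module):
    """Hypothetical sketch: fuse 2D region features with 3D point
    features via cross-attention and predict per-point affordance.
    This is NOT the IAG network, only an illustration of the idea."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        # Cross-attention: point features query image region features.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Map each fused point feature to an affordance score in [0, 1].
        self.head = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, 1), nn.Sigmoid(),
        )

    def forward(self, point_feats, region_feats):
        # point_feats:  (B, N_points, dim) from a point-cloud backbone
        # region_feats: (B, N_regions, dim) from an image backbone
        fused, _ = self.attn(point_feats, region_feats, region_feats)
        fused = self.norm(point_feats + fused)  # residual fusion
        return self.head(fused).squeeze(-1)     # (B, N_points)

if __name__ == "__main__":
    # Toy usage: random tensors stand in for backbone outputs.
    model = CrossSourceAffordanceHead()
    pts = torch.randn(2, 2048, 256)   # e.g. 2048 sampled points
    regs = torch.randn(2, 16, 256)    # e.g. 16 interaction regions
    print(model(pts, regs).shape)     # torch.Size([2, 2048])
```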

Cite

Text

Yang et al. "Grounding 3D Object Affordance from 2D Interactions in Images." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.01001

Markdown

[Yang et al. "Grounding 3D Object Affordance from 2D Interactions in Images." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/yang2023iccv-grounding/) doi:10.1109/ICCV51070.2023.01001

BibTeX

@inproceedings{yang2023iccv-grounding,
  title     = {{Grounding 3D Object Affordance from 2D Interactions in Images}},
  author    = {Yang, Yuhang and Zhai, Wei and Luo, Hongchen and Cao, Yang and Luo, Jiebo and Zha, Zheng-Jun},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {10905--10915},
  doi       = {10.1109/ICCV51070.2023.01001},
  url       = {https://mlanthology.org/iccv/2023/yang2023iccv-grounding/}
}