Understanding 3D Object Interaction from a Single Image

Abstract

Humans can easily understand a single image as depicting multiple potential objects permitting interaction. We use this skill to plan our interactions with the world and to accelerate understanding of new objects without engaging in interaction. In this paper, we would like to endow machines with a similar ability, so that intelligent agents can better explore 3D scenes or manipulate objects. Our approach is a transformer-based model that predicts the 3D location, physical properties, and affordances of objects. To power this model, we collect a dataset of Internet videos, egocentric videos, and indoor images to train and validate our approach. Our model yields strong performance on our data and generalizes well to robotics data.

Cite

Text

Qian and Fouhey. "Understanding 3D Object Interaction from a Single Image." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.01988

Markdown

[Qian and Fouhey. "Understanding 3D Object Interaction from a Single Image." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/qian2023iccv-understanding/) doi:10.1109/ICCV51070.2023.01988

BibTeX

@inproceedings{qian2023iccv-understanding,
  title     = {{Understanding 3D Object Interaction from a Single Image}},
  author    = {Qian, Shengyi and Fouhey, David F.},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {21753--21763},
  doi       = {10.1109/ICCV51070.2023.01988},
  url       = {https://mlanthology.org/iccv/2023/qian2023iccv-understanding/}
}