Semantically-Enriched 3D Models for Common-Sense Knowledge
Abstract
We identify and connect a set of physical properties to 3D models to create a richly annotated 3D model dataset with data on physical sizes, static support, attachment surfaces, material compositions, and weights. To collect these physical property priors, we leverage observations of 3D models within 3D scenes and information from images and text. By augmenting 3D models with these properties we create a semantically rich, multi-layered dataset of common indoor objects. We demonstrate the usefulness of these annotations for improving 3D scene synthesis systems, enabling faceted semantic queries into 3D model datasets, and reasoning about how objects can be manipulated by people using weight and static friction estimates.
Cite
Text
Savva et al. "Semantically-Enriched 3D Models for Common-Sense Knowledge." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2015. doi:10.1109/CVPRW.2015.7301289
Markdown
[Savva et al. "Semantically-Enriched 3D Models for Common-Sense Knowledge." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2015.](https://mlanthology.org/cvprw/2015/savva2015cvprw-semanticallyenriched/) doi:10.1109/CVPRW.2015.7301289
BibTeX
@inproceedings{savva2015cvprw-semanticallyenriched,
title = {{Semantically-Enriched 3D Models for Common-Sense Knowledge}},
author = {Savva, Manolis and Chang, Angel X. and Hanrahan, Pat},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2015},
pages = {24--31},
doi = {10.1109/CVPRW.2015.7301289},
url = {https://mlanthology.org/cvprw/2015/savva2015cvprw-semanticallyenriched/}
}