Understanding Real World Indoor Scenes with Synthetic Data

Abstract

Scene understanding is a prerequisite to many high-level tasks for any automated intelligent machine operating in real-world environments. Recent attempts with supervised learning have shown promise in this direction, but have also highlighted the need for enormous quantities of supervised data, since performance increases in proportion to the amount of data used. However, collecting such data quickly becomes prohibitive given the manual labour it requires. In this work, we focus our attention on depth-based semantic per-pixel labelling as a scene understanding problem and show the potential of computer graphics to generate virtually unlimited labelled data from synthetic 3D scenes. By carefully synthesizing training data with appropriate noise models, we achieve performance comparable to state-of-the-art RGB-D systems on the NYUv2 dataset despite using only depth data as input, and set a benchmark for depth-based segmentation on the SUN RGB-D dataset.
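The abstract's key idea is that clean rendered depth maps must be corrupted with a sensor noise model before they resemble real camera output. The paper's exact noise model is not reproduced here; as a rough illustration, the sketch below perturbs a clean synthetic depth map with depth-dependent Gaussian axial noise and random dropout, loosely following the empirical Kinect noise model of Nguyen et al. (2012). The function name `simulate_depth_noise` and the dropout probability are illustrative assumptions, not the authors' code.

```python
import numpy as np

def simulate_depth_noise(clean_depth, sigma0=0.0012, sigma_scale=0.0019,
                         z0=0.4, dropout_prob=0.01, rng=None):
    """Corrupt a clean synthetic depth map (in metres) with sensor-like noise.

    Axial noise grows quadratically with distance, following the empirical
    Kinect model of Nguyen et al. (2012); random dropout mimics missing
    returns on glossy or oblique surfaces (encoded as depth 0).
    """
    rng = np.random.default_rng() if rng is None else rng
    z = clean_depth.astype(np.float64)

    # Depth-dependent axial noise: sigma(z) = sigma0 + sigma_scale * (z - z0)^2
    sigma = sigma0 + sigma_scale * (z - z0) ** 2
    noisy = z + rng.normal(0.0, 1.0, size=z.shape) * sigma

    # Random dropout simulates invalid depth returns
    noisy[rng.random(z.shape) < dropout_prob] = 0.0
    return noisy

# Example: a flat synthetic wall 2 m away, rendered as a 480x640 depth image
clean = np.full((480, 640), 2.0)
noisy = simulate_depth_noise(clean)
```

Training a per-pixel labelling network on depth maps corrupted this way, rather than on noise-free renders, is what lets the synthetic data transfer to real sensor input.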

Cite

Text

Handa et al. "Understanding Real World Indoor Scenes with Synthetic Data." Conference on Computer Vision and Pattern Recognition, 2016.

Markdown

[Handa et al. "Understanding Real World Indoor Scenes with Synthetic Data." Conference on Computer Vision and Pattern Recognition, 2016.](https://mlanthology.org/cvpr/2016/handa2016cvpr-understanding/)

BibTeX

@inproceedings{handa2016cvpr-understanding,
  title     = {{Understanding Real World Indoor Scenes with Synthetic Data}},
  author    = {Handa, Ankur and Patraucean, Viorica and Badrinarayanan, Vijay and Stent, Simon and Cipolla, Roberto},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2016},
  url       = {https://mlanthology.org/cvpr/2016/handa2016cvpr-understanding/}
}