Indoor Semantic Segmentation Using Depth Information
Abstract
This work addresses multi-class segmentation of indoor scenes from RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art results on the NYU-v2 depth dataset with an accuracy of 64.5%. We illustrate the labeling of indoor scenes in video sequences that could be processed in real time using appropriate hardware such as an FPGA.
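The abstract only names the architecture; as a rough, hypothetical illustration (not the authors' implementation), the PyTorch sketch below shows one way a multiscale convolutional network can consume RGB-D input: the depth map is stacked with the RGB channels as a fourth input channel, a shared feature extractor is run on the input at several scales, and the upsampled feature maps are fused before a per-pixel classifier. The class name `MultiscaleRGBDNet`, the layer sizes, the scale set, and the 14-way output are all assumptions made for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleRGBDNet(nn.Module):
    """Sketch of a multiscale ConvNet over RGB-D input (illustrative only).

    Depth is stacked with RGB as a 4th channel; the same convolutional
    feature extractor is applied at several scales, and the upsampled
    feature maps are summed before a per-pixel linear classifier.
    Layer sizes and class count are placeholders, not the paper's.
    """

    def __init__(self, num_classes: int = 14, scales=(1.0, 0.5, 0.25)):
        super().__init__()
        self.scales = scales
        # Shared feature extractor: weights are tied across scales.
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 64, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
        )
        # 1x1 convolution acts as a per-pixel classifier over fused features.
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb, depth], dim=1)  # (N, 4, H, W)
        h, w = x.shape[-2:]
        fused = 0
        for s in self.scales:
            xs = x if s == 1.0 else F.interpolate(
                x, scale_factor=s, mode="bilinear", align_corners=False)
            fs = self.features(xs)
            # Bring each scale's features back to full resolution and sum.
            fused = fused + F.interpolate(
                fs, size=(h, w), mode="bilinear", align_corners=False)
        return self.classifier(fused)  # per-pixel class scores

if __name__ == "__main__":
    net = MultiscaleRGBDNet()
    rgb = torch.randn(1, 3, 240, 320)    # RGB frame
    depth = torch.randn(1, 1, 240, 320)  # aligned depth map
    print(net(rgb, depth).shape)         # torch.Size([1, 14, 240, 320])
```

Summing upsampled feature maps is one simple fusion choice; concatenation followed by a wider classifier would be an equally plausible variant of the same multiscale idea.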
Cite

Text

Couprie et al. "Indoor Semantic Segmentation Using Depth Information." International Conference on Learning Representations, 2013.

Markdown

[Couprie et al. "Indoor Semantic Segmentation Using Depth Information." International Conference on Learning Representations, 2013.](https://mlanthology.org/iclr/2013/couprie2013iclr-indoor/)

BibTeX
@inproceedings{couprie2013iclr-indoor,
  title     = {{Indoor Semantic Segmentation Using Depth Information}},
  author    = {Couprie, Camille and Farabet, Clément and Najman, Laurent and LeCun, Yann},
  booktitle = {International Conference on Learning Representations},
  year      = {2013},
  url       = {https://mlanthology.org/iclr/2013/couprie2013iclr-indoor/}
}