RGB-D Scene Labeling with Multimodal Recurrent Neural Networks
Abstract
Recurrent neural networks (RNNs) are able to capture context in an image by modeling long-range semantic dependencies among image units. However, existing methods only utilize RNNs to model dependencies of a single modality (e.g., RGB) for labeling. In this work we extend these single-modal RNNs to multimodal RNNs (MM-RNNs) and apply them to RGB-D scene labeling. Our MM-RNNs are capable of seamlessly modeling dependencies of both RGB and depth modalities, and allow 'memory' sharing across modalities. By sharing 'memory', each modality possesses not only its own properties but also those of the other modality, and thus becomes more discriminative for distinguishing pixels. Moreover, we analyse two simple extensions of single-modal RNNs and demonstrate that our MM-RNNs outperform both of them. Integrated with convolutional neural networks (CNNs), our method yields an end-to-end network for RGB-D scene labeling. Extensive experiments on NYU Depth V1 and V2 demonstrate the effectiveness of MM-RNNs.
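The core idea of the abstract, sharing 'memory' across modalities, can be illustrated with a minimal sketch. This is not the paper's actual formulation (the abstract gives no equations); it is a hypothetical single recurrence step in which each modality's hidden state is updated from its own input plus a shared memory pooled from both modalities' previous states, so context learned in one modality informs the other:

```python
import numpy as np

def mm_rnn_step(x_rgb, x_depth, h_rgb, h_depth, params):
    """One step of a hypothetical multimodal RNN cell.

    The shared memory is formed from BOTH modalities' previous hidden
    states; each modality then mixes its own input with that memory.
    All weight names are illustrative, not taken from the paper.
    """
    W_rgb, W_depth, U_shared = params
    # Shared 'memory': pool the previous hidden states of both modalities.
    shared = np.tanh(U_shared @ np.concatenate([h_rgb, h_depth]))
    # Each modality updates from its own input plus the shared memory.
    h_rgb_new = np.tanh(W_rgb @ x_rgb + shared)
    h_depth_new = np.tanh(W_depth @ x_depth + shared)
    return h_rgb_new, h_depth_new

# Toy usage with random weights (dimensions chosen arbitrarily).
hdim, xdim = 4, 3
rng = np.random.default_rng(0)
params = (rng.standard_normal((hdim, xdim)),
          rng.standard_normal((hdim, xdim)),
          rng.standard_normal((hdim, 2 * hdim)))
h_rgb, h_depth = mm_rnn_step(rng.standard_normal(xdim),
                             rng.standard_normal(xdim),
                             np.zeros(hdim), np.zeros(hdim), params)
```

Because the shared term enters both updates, gradients from either modality's labeling loss would flow into the pooled memory, which is one way the cross-modal coupling described in the abstract could be realized.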
Cite
Text
Fan et al. "RGB-D Scene Labeling with Multimodal Recurrent Neural Networks." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2017. doi:10.1109/CVPRW.2017.31
Markdown
[Fan et al. "RGB-D Scene Labeling with Multimodal Recurrent Neural Networks." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2017.](https://mlanthology.org/cvprw/2017/fan2017cvprw-rgbd/) doi:10.1109/CVPRW.2017.31
BibTeX
@inproceedings{fan2017cvprw-rgbd,
title = {{RGB-D Scene Labeling with Multimodal Recurrent Neural Networks}},
author = {Fan, Heng and Mei, Xue and Prokhorov, Danil V. and Ling, Haibin},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2017},
pages = {203-211},
doi = {10.1109/CVPRW.2017.31},
url = {https://mlanthology.org/cvprw/2017/fan2017cvprw-rgbd/}
}