Multi-Modal Unsupervised Feature Learning for RGB-D Scene Labeling
Abstract
Most existing approaches to RGB-D indoor scene labeling employ hand-crafted features for each modality independently and combine them heuristically. There have been some attempts to learn features directly from raw RGB-D data, but the performance has not been satisfactory. In this paper, we adapt unsupervised feature learning to RGB-D labeling as a multi-modality learning problem. Our framework performs feature learning and feature encoding simultaneously, which significantly boosts performance. By stacking the basic learning structure, higher-level features are derived and combined with lower-level features to better represent RGB-D data. Experimental results on the benchmark NYU depth dataset show that our method achieves competitive performance compared with the state of the art.
Cite
Wang et al. "Multi-Modal Unsupervised Feature Learning for RGB-D Scene Labeling." European Conference on Computer Vision, 2014. doi:10.1007/978-3-319-10602-1_30
@inproceedings{wang2014eccv-multi,
title = {{Multi-Modal Unsupervised Feature Learning for RGB-D Scene Labeling}},
author = {Wang, Anran and Lu, Jiwen and Wang, Gang and Cai, Jianfei and Cham, Tat-Jen},
booktitle = {European Conference on Computer Vision},
year = {2014},
  pages = {453--467},
doi = {10.1007/978-3-319-10602-1_30},
url = {https://mlanthology.org/eccv/2014/wang2014eccv-multi/}
}