Stacked U-Nets for Ground Material Segmentation in Remote Sensing Imagery
Abstract
We present a semantic segmentation algorithm for RGB remote sensing images. Our method is based on the Dilated Stacked U-Nets architecture, a state-of-the-art design that has shown good performance in other applications. We perform additional post-processing by blending image tiles and degridding the result. Our method gives competitive results on the DeepGlobe dataset.
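The abstract only names the tile-blending and degridding post-processing without detailing it. As a rough, hypothetical illustration of that kind of step (not the authors' implementation), the sketch below blends overlapping per-tile class probability maps with a windowed weighted average and then takes a per-pixel argmax; all function and parameter names such as `blend_tile_predictions` and `tile_size` are assumptions.

```python
import numpy as np

def blend_tile_predictions(tile_probs, tile_origins, image_shape, tile_size):
    """Blend per-tile class probability maps into one full-image prediction.

    tile_probs   : list of (tile_size, tile_size, n_classes) arrays
    tile_origins : list of (row, col) top-left corners; tiles are assumed
                   to lie fully inside the image
    image_shape  : (height, width) of the full image
    """
    n_classes = tile_probs[0].shape[-1]
    accum = np.zeros((*image_shape, n_classes), dtype=np.float64)
    weight = np.zeros((*image_shape, 1), dtype=np.float64)

    # 2-D Hann window: down-weights tile borders so overlapping tiles
    # dominate near their centres, which suppresses grid seams.
    hann = np.hanning(tile_size)
    window = np.outer(hann, hann)[..., None] + 1e-6

    for probs, (r, c) in zip(tile_probs, tile_origins):
        accum[r:r + tile_size, c:c + tile_size] += probs * window
        weight[r:r + tile_size, c:c + tile_size] += window

    blended = accum / weight          # weighted average of overlapping tiles
    return blended.argmax(axis=-1)    # per-pixel class labels
```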
Cite
Text
Ghosh et al. "Stacked U-Nets for Ground Material Segmentation in Remote Sensing Imagery." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2018. doi:10.1109/CVPRW.2018.00047
Markdown
[Ghosh et al. "Stacked U-Nets for Ground Material Segmentation in Remote Sensing Imagery." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2018.](https://mlanthology.org/cvprw/2018/ghosh2018cvprw-stacked/) doi:10.1109/CVPRW.2018.00047
BibTeX
@inproceedings{ghosh2018cvprw-stacked,
title = {{Stacked U-Nets for Ground Material Segmentation in Remote Sensing Imagery}},
author = {Ghosh, Arthita and Ehrlich, Max and Shah, Sohil and Davis, Larry S. and Chellappa, Rama},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2018},
pages = {257--261},
doi = {10.1109/CVPRW.2018.00047},
url = {https://mlanthology.org/cvprw/2018/ghosh2018cvprw-stacked/}
}