Feedforward Semantic Segmentation with Zoom-Out Features
Abstract
We introduce a purely feedforward architecture for semantic segmentation. We map small image elements (superpixels) to rich feature representations extracted from a sequence of nested regions of increasing extent. These regions are obtained by "zooming out" from the superpixel all the way to scene-level resolution. This approach exploits statistical structure in the image and in the label space without setting up explicit structured prediction mechanisms, and thus avoids complex and expensive inference. Instead, superpixels are classified by a feedforward multilayer network. Our architecture achieves 69.6% average accuracy on the PASCAL VOC 2012 test set, and 86.1% pixel accuracy on the Stanford Background Dataset.
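The core idea in the abstract can be sketched in a few lines: concatenate per-superpixel features from a sequence of nested zoom levels into one descriptor, then classify that descriptor with a plain feedforward network. The function names, shapes, and random weights below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def zoom_out_features(region_features):
    """Concatenate per-level feature vectors (local -> scene) into one descriptor."""
    return np.concatenate(region_features)

def mlp_classify(x, weights, biases):
    """Minimal feedforward multilayer network: ReLU hidden layers, argmax output."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, W @ x + b)
    logits = weights[-1] @ x + biases[-1]
    return int(np.argmax(logits))

# Toy example: 3 zoom levels with 4-dim features each, 2 output classes.
rng = np.random.default_rng(0)
levels = [rng.standard_normal(4) for _ in range(3)]   # hypothetical per-level features
x = zoom_out_features(levels)                          # 12-dim zoom-out descriptor
weights = [rng.standard_normal((8, 12)), rng.standard_normal((2, 8))]
biases = [np.zeros(8), np.zeros(2)]
label = mlp_classify(x, weights, biases)               # predicted class for the superpixel
```

Because no structured inference follows, every superpixel is labeled independently by this single forward pass.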
Cite
Text
Mostajabi et al. "Feedforward Semantic Segmentation with Zoom-Out Features." Conference on Computer Vision and Pattern Recognition, 2015. doi:10.1109/CVPR.2015.7298959
Markdown
[Mostajabi et al. "Feedforward Semantic Segmentation with Zoom-Out Features." Conference on Computer Vision and Pattern Recognition, 2015.](https://mlanthology.org/cvpr/2015/mostajabi2015cvpr-feedforward/) doi:10.1109/CVPR.2015.7298959
BibTeX
@inproceedings{mostajabi2015cvpr-feedforward,
title = {{Feedforward Semantic Segmentation with Zoom-Out Features}},
author = {Mostajabi, Mohammadreza and Yadollahpour, Payman and Shakhnarovich, Gregory},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2015},
doi = {10.1109/CVPR.2015.7298959},
url = {https://mlanthology.org/cvpr/2015/mostajabi2015cvpr-feedforward/}
}