Global-Residual and Local-Boundary Refinement Networks for Rectifying Scene Parsing Predictions
Abstract
Most existing scene parsing methods suffer from two serious problems: inconsistent parsing results and object boundary shift. To tackle these problems, we first propose an iterative Global-residual Refinement Network (GRN) that exploits global contextual information to predict parsing residuals and iteratively smooth inconsistent parsing labels. Furthermore, we propose a Local-boundary Refinement Network (LRN) that learns position-adaptive propagation coefficients so that local contextual information from neighbors can be optimally captured for refining object boundaries. Finally, we cascade the two proposed refinement networks after a fully residual convolutional neural network within a unified framework. Extensive experiments on the ADE20K and Cityscapes datasets demonstrate the effectiveness of the two refinement methods for refining scene parsing predictions.
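The core idea of the GRN stage, iteratively adding a predicted residual on top of coarse parsing logits, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the `residual_fn` below stands in for the trained refinement network, and the hand-written 4-neighborhood smoothing residual is a hypothetical stand-in used only to show how repeated residual updates can suppress an inconsistent label.

```python
import numpy as np

def iterative_residual_refinement(logits, residual_fn, num_iters=3):
    """Iteratively add predicted residuals to coarse parsing logits.

    `logits` has shape (H, W, C); `residual_fn` maps the current logits
    to a correction of the same shape.  In the paper this role is played
    by a learned network; here it is any callable (an assumption for
    illustration).
    """
    refined = logits.copy()
    for _ in range(num_iters):
        refined = refined + residual_fn(refined)
    return refined

def smoothing_residual(logits):
    """Toy residual: nudge each pixel's logits toward the mean of its
    4-neighborhood, mimicking the smoothing of inconsistent labels."""
    padded = np.pad(logits, ((1, 1), (1, 1), (0, 0)), mode="edge")
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return 0.5 * (neigh - logits)

# A single inconsistent pixel inside a uniform region is pulled back
# toward its neighbors' label after a few refinement iterations.
logits = np.zeros((5, 5, 2))
logits[..., 0] = 1.0            # background class dominates everywhere
logits[2, 2] = [-1.0, 2.0]      # one pixel mislabeled as class 1
refined = iterative_residual_refinement(logits, smoothing_residual, num_iters=5)
```

After five iterations the outlier pixel's argmax flips back to the surrounding class, whereas a single pass of the same residual would not be enough, which is the motivation for making the refinement iterative.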
Cite
Text
Zhang et al. "Global-Residual and Local-Boundary Refinement Networks for Rectifying Scene Parsing Predictions." International Joint Conference on Artificial Intelligence, 2017. doi:10.24963/IJCAI.2017/479
Markdown
[Zhang et al. "Global-Residual and Local-Boundary Refinement Networks for Rectifying Scene Parsing Predictions." International Joint Conference on Artificial Intelligence, 2017.](https://mlanthology.org/ijcai/2017/zhang2017ijcai-global/) doi:10.24963/IJCAI.2017/479
BibTeX
@inproceedings{zhang2017ijcai-global,
title = {{Global-Residual and Local-Boundary Refinement Networks for Rectifying Scene Parsing Predictions}},
author = {Zhang, Rui and Tang, Sheng and Lin, Min and Li, Jintao and Yan, Shuicheng},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2017},
pages = {3427-3433},
doi = {10.24963/IJCAI.2017/479},
url = {https://mlanthology.org/ijcai/2017/zhang2017ijcai-global/}
}