Regional Interactive Image Segmentation Networks
Abstract
Interactive image segmentation models allow users to iteratively add new inputs for refinement until a satisfactory result is obtained. An ideal interactive segmentation model should therefore learn to capture the user's intention with minimal interaction. However, existing models fail to fully utilize the valuable user inputs during segmentation refinement and thus offer an unsatisfactory user experience. To fully exploit the user-provided information, we propose a new deep framework, called Regional Interactive Segmentation Network (RIS-Net), which expands the field-of-view of the given inputs to capture the local regional information surrounding them for local refinement. Additionally, RIS-Net adopts multiscale global contextual information to augment each local region and improve feature representation. We also introduce click discount factors to develop a novel optimization strategy for more effective end-to-end training. Comprehensive evaluations on four challenging datasets demonstrate the superiority of the proposed RIS-Net over other state-of-the-art approaches.
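The abstract describes encoding user clicks and expanding each click's field-of-view to its surrounding local region. The sketch below illustrates this idea under common assumptions from interactive-segmentation work (Euclidean distance maps for clicks, fixed-size crops around each click); RIS-Net's actual encoding, crop sizes, and network inputs may differ, and the function names here are purely illustrative.

```python
import numpy as np

def click_distance_map(shape, clicks):
    """Euclidean distance from each pixel to the nearest user click.

    A common click-encoding scheme in interactive segmentation; the paper's
    exact encoding may differ."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dmap = np.full(shape, np.inf)
    for cy, cx in clicks:
        dmap = np.minimum(dmap, np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2))
    # Clip large distances, as is typical for click maps fed to a network.
    return np.minimum(dmap, 255.0)

def crop_local_region(image, click, size=64):
    """Crop a size x size patch around a click, illustrating the idea of
    expanding a click's field-of-view to its local surrounding region."""
    h, w = image.shape[:2]
    cy, cx = click
    # Shift the window so the crop stays inside the image bounds.
    y0 = int(np.clip(cy - size // 2, 0, max(h - size, 0)))
    x0 = int(np.clip(cx - size // 2, 0, max(w - size, 0)))
    return image[y0:y0 + size, x0:x0 + size]

# Toy example: a blank image with two hypothetical positive clicks.
img = np.zeros((128, 128, 3), dtype=np.uint8)
pos_clicks = [(40, 40), (90, 100)]
dmap = click_distance_map(img.shape[:2], pos_clicks)
patch = crop_local_region(img, pos_clicks[0], size=64)
print(dmap[40, 40], patch.shape)  # distance is 0 at a click; patch is 64x64x3
```

In a full pipeline, the click maps would be concatenated with the RGB image as extra input channels, and each local crop would be refined and fused with global context, as the abstract outlines.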
Cite
Text
Liew et al. "Regional Interactive Image Segmentation Networks." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.297
Markdown
[Liew et al. "Regional Interactive Image Segmentation Networks." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/liew2017iccv-regional/) doi:10.1109/ICCV.2017.297
BibTeX
@inproceedings{liew2017iccv-regional,
title = {{Regional Interactive Image Segmentation Networks}},
author = {Liew, Jun Hao and Wei, Yunchao and Xiong, Wei and Ong, Sim-Heng and Feng, Jiashi},
booktitle = {International Conference on Computer Vision},
year = {2017},
doi = {10.1109/ICCV.2017.297},
url = {https://mlanthology.org/iccv/2017/liew2017iccv-regional/}
}