DANCE: A Deep Attentive Contour Model for Efficient Instance Segmentation
Abstract
Contour-based instance segmentation methods are attractive due to their efficiency. However, existing contour-based methods suffer from lossy representations, complex pipelines, or difficulty in model training, resulting in subpar mask accuracy on challenging datasets like MS-COCO. In this work, we propose a novel deep attentive contour model, named DANCE, that achieves better instance segmentation accuracy while maintaining high efficiency. To this end, DANCE introduces two new designs: attentive contour deformation, which refines the quality of segmentation contours, and segment-wise matching, which eases model training. Comprehensive experiments demonstrate that DANCE excels at deforming the initial contour toward the real object boundaries in a more natural and efficient way. The effectiveness of DANCE is further validated on the COCO dataset, where it achieves 38.1% mAP and outperforms all other contour-based instance segmentation models. To the best of our knowledge, DANCE is the first contour-based model that achieves performance comparable to pixel-wise segmentation models. Code is available at https://github.com/lkevinzc/dance.
Cite
Text
Liu et al. "DANCE: A Deep Attentive Contour Model for Efficient Instance Segmentation." Winter Conference on Applications of Computer Vision, 2021.
Markdown
[Liu et al. "DANCE: A Deep Attentive Contour Model for Efficient Instance Segmentation." Winter Conference on Applications of Computer Vision, 2021.](https://mlanthology.org/wacv/2021/liu2021wacv-dance/)
BibTeX
@inproceedings{liu2021wacv-dance,
title = {{DANCE: A Deep Attentive Contour Model for Efficient Instance Segmentation}},
author = {Liu, Zichen and Liew, Jun Hao and Chen, Xiangyu and Feng, Jiashi},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2021},
pages = {345--354},
url = {https://mlanthology.org/wacv/2021/liu2021wacv-dance/}
}