Deep Interactive Object Selection

Abstract

Interactive object selection is an important research problem with many applications. Previous algorithms require substantial user interaction to estimate the foreground and background distributions. In this paper, we present a novel deep-learning-based algorithm which has a much better understanding of objectness and can reduce user interaction to just a few clicks. Our algorithm transforms user-provided positive and negative clicks into two Euclidean distance maps which are then concatenated with the RGB channels of images to compose (image, user interactions) pairs. We generate many such pairs by combining several random sampling strategies to model users' click patterns and use them to finetune deep Fully Convolutional Networks (FCNs). Finally, the output probability maps of our FCN-8s model are integrated with graph cut optimization to refine the boundary segments. Our model is trained on the PASCAL segmentation dataset and evaluated on other datasets with different object classes. Experimental results on both seen and unseen objects clearly demonstrate that our algorithm has good generalization ability and is superior to all existing interactive object selection approaches.
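The core input transformation described above can be sketched as follows: each set of clicks (positive or negative) becomes a Euclidean distance map over the image grid, and the two maps are stacked with the RGB channels to form the network input. This is a minimal NumPy illustration, not the authors' implementation; the truncation cap and the helper name `click_distance_map` are assumptions for the sketch.

```python
import numpy as np

def click_distance_map(clicks, height, width, truncate=255.0):
    """Euclidean distance from every pixel to its nearest click.

    `clicks` is a list of (row, col) positions. Distances are capped at
    `truncate` (a common choice; the exact cap may differ from the paper).
    """
    if not clicks:
        # No clicks of this type: every pixel is maximally far away.
        return np.full((height, width), truncate, dtype=np.float32)
    rows, cols = np.mgrid[0:height, 0:width]
    # Distance from each pixel to each click, then take the pixel-wise minimum.
    dists = [np.sqrt((rows - r) ** 2 + (cols - c) ** 2) for r, c in clicks]
    return np.minimum(np.min(dists, axis=0), truncate).astype(np.float32)

# Compose the 5-channel (image, user interactions) input for the FCN.
h, w = 64, 64
image = np.zeros((h, w, 3), dtype=np.float32)            # stand-in RGB image
pos_map = click_distance_map([(32, 32)], h, w)           # positive clicks
neg_map = click_distance_map([(5, 5), (60, 60)], h, w)   # negative clicks
x = np.concatenate([image, pos_map[..., None], neg_map[..., None]], axis=2)
print(x.shape)  # (64, 64, 5)
```

The clicked pixels themselves sit at distance zero in their map, so the network sees exactly where the user pointed, while the smooth distance falloff encodes how far every other pixel is from the nearest interaction.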

Cite

Text

Xu et al. "Deep Interactive Object Selection." Conference on Computer Vision and Pattern Recognition, 2016. doi:10.1109/CVPR.2016.47

Markdown

[Xu et al. "Deep Interactive Object Selection." Conference on Computer Vision and Pattern Recognition, 2016.](https://mlanthology.org/cvpr/2016/xu2016cvpr-deep/) doi:10.1109/CVPR.2016.47

BibTeX

@inproceedings{xu2016cvpr-deep,
  title     = {{Deep Interactive Object Selection}},
  author    = {Xu, Ning and Price, Brian and Cohen, Scott and Yang, Jimei and Huang, Thomas S.},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2016},
  doi       = {10.1109/CVPR.2016.47},
  url       = {https://mlanthology.org/cvpr/2016/xu2016cvpr-deep/}
}