Learning to Recommend Frame for Interactive Video Object Segmentation in the Wild

Abstract

This paper proposes a framework for interactive video object segmentation (VOS) in the wild, where users iteratively choose frames to annotate. Based on the user annotations, a segmentation algorithm then refines the masks. The previous interactive VOS paradigm selects the frame with the worst evaluation metric; however, computing this metric requires ground truth, which is unavailable at test time. In contrast, we argue that the frame with the worst evaluation metric is not necessarily the most valuable frame, i.e., the one whose annotation yields the greatest performance improvement across the video. We therefore formulate frame selection in interactive VOS as a Markov Decision Process, in which an agent learns to recommend frames via deep reinforcement learning. The learned agent automatically determines the most valuable frame, making the interactive setting practical in the wild. Experimental results on public datasets show the effectiveness of our learned agent without any changes to the underlying VOS algorithms. Our data, code, and models are available at https://github.com/svip-lab/IVOS-W.
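The frame-selection idea in the abstract can be sketched as a tiny epsilon-greedy policy over per-frame values. This is a minimal illustrative sketch, not the paper's actual agent: `q_values` stands in for scores produced by a learned recommendation network, and the annotate/refine steps of the interactive loop are represented only by tracking which frames have been annotated.

```python
import random

def recommend_frame(q_values, annotated, epsilon=0.0):
    """Epsilon-greedy frame recommendation.

    q_values:  hypothetical per-frame values from a learned agent
               (higher = more valuable to annotate next).
    annotated: set of frame indices annotated in earlier rounds.
    """
    candidates = [i for i in range(len(q_values)) if i not in annotated]
    if random.random() < epsilon:      # explore: random unannotated frame
        return random.choice(candidates)
    # exploit: pick the unannotated frame with the highest value
    return max(candidates, key=lambda i: q_values[i])

def interaction_round(q_values, annotated):
    """One round of the interactive loop: recommend a frame, then the
    user annotation and mask refinement (omitted here) would follow."""
    frame = recommend_frame(q_values, annotated)
    annotated.add(frame)
    return frame
```

For example, with values `[0.2, 0.9, 0.5]` and frame 1 already annotated, the greedy policy recommends frame 2. The key point of the paper is that these values come from a learned agent rather than from a ground-truth error metric, which is what makes selection feasible at test time.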

Cite

Text

Yin et al. "Learning to Recommend Frame for Interactive Video Object Segmentation in the Wild." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.01519

Markdown

[Yin et al. "Learning to Recommend Frame for Interactive Video Object Segmentation in the Wild." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/yin2021cvpr-learning/) doi:10.1109/CVPR46437.2021.01519

BibTeX

@inproceedings{yin2021cvpr-learning,
  title     = {{Learning to Recommend Frame for Interactive Video Object Segmentation in the Wild}},
  author    = {Yin, Zhaoyuan and Zheng, Jia and Luo, Weixin and Qian, Shenhan and Zhang, Hanling and Gao, Shenghua},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2021},
  pages     = {15445--15454},
  doi       = {10.1109/CVPR46437.2021.01519},
  url       = {https://mlanthology.org/cvpr/2021/yin2021cvpr-learning/}
}