Video Object Segmentation with Referring Expressions

Abstract

Most semi-supervised video object segmentation methods rely on a pixel-accurate mask of a target object provided for the first video frame. However, obtaining a detailed mask is expensive and time-consuming. In this work, we explore a more practical and natural way of identifying a target object by employing language referring expressions. Leveraging recent advances in language grounding models designed for images, we propose an approach that extends them to video data, ensuring temporally coherent predictions. To evaluate our approach, we augment the popular video object segmentation benchmarks, $\text {DAVIS}_{\text {16}}$ and $\text {DAVIS}_{\text {17}}$ , with language descriptions of target objects. We show that our approach performs on par with methods that have access to the object mask on $\text {DAVIS}_{\text {16}}$ and is competitive with methods using scribbles on the challenging $\text {DAVIS}_{\text {17}}$ .

Cite

Text

Khoreva et al. "Video Object Segmentation with Referring Expressions." European Conference on Computer Vision Workshops, 2018. doi:10.1007/978-3-030-11018-5_2

Markdown

[Khoreva et al. "Video Object Segmentation with Referring Expressions." European Conference on Computer Vision Workshops, 2018.](https://mlanthology.org/eccvw/2018/khoreva2018eccvw-video/) doi:10.1007/978-3-030-11018-5_2

BibTeX

@inproceedings{khoreva2018eccvw-video,
  title     = {{Video Object Segmentation with Referring Expressions}},
  author    = {Khoreva, Anna and Rohrbach, Anna and Schiele, Bernt},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2018},
  pages     = {7--12},
  doi       = {10.1007/978-3-030-11018-5_2},
  url       = {https://mlanthology.org/eccvw/2018/khoreva2018eccvw-video/}
}