Video Segmentation with Just a Few Strokes

Abstract

As videos become more common in computer vision, the need for annotated video datasets grows. Such datasets are required either as training data or as ground truth for benchmarks. A particular challenge in video segmentation arises from disocclusions, which hamper frame-to-frame propagation, especially in conjunction with non-moving objects. We show that combining motion cues from point trajectories, as known from motion segmentation, with minimal supervision largely solves this problem. Moreover, we integrate a new constraint that enforces consistency of the color distribution in successive frames. We quantify user interaction effort with respect to segmentation quality on challenging ego-motion videos, and we compare our approach to a diverse set of algorithms both in terms of user effort and in terms of performance on common video segmentation benchmarks.
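The abstract mentions a constraint enforcing consistency of the segment's color distribution across successive frames. As a rough illustration of that idea (not the paper's actual formulation), one can compare normalized color histograms of the segmented region in consecutive frames with a chi-squared distance; the function names, bin count, and distance choice here are all illustrative assumptions:

```python
import numpy as np

def color_histogram(frame, mask, bins=8):
    """Normalized per-channel color histogram over the masked pixels.

    frame: (H, W, 3) array of color values in [0, 256)
    mask:  (H, W) boolean segmentation mask
    """
    pixels = frame[mask]  # (N, 3) pixels inside the segment
    hist = np.concatenate([
        np.histogram(pixels[:, c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return hist / max(hist.sum(), 1.0)

def histogram_consistency(frame_t, mask_t, frame_t1, mask_t1, bins=8):
    """Chi-squared distance between segment color histograms in two
    successive frames; small values indicate a consistent color model.
    (Illustrative stand-in for the paper's consistency constraint.)"""
    h1 = color_histogram(frame_t, mask_t, bins)
    h2 = color_histogram(frame_t1, mask_t1, bins)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-10))
```

Such a distance could serve as a penalty term: a candidate segmentation for frame t+1 whose color histogram drifts far from frame t's would be discouraged, which is one plausible way a color-consistency constraint stabilizes propagation across disocclusions.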

Cite

Text

Nagaraja et al. "Video Segmentation with Just a Few Strokes." International Conference on Computer Vision, 2015. doi:10.1109/ICCV.2015.370

Markdown

[Nagaraja et al. "Video Segmentation with Just a Few Strokes." International Conference on Computer Vision, 2015.](https://mlanthology.org/iccv/2015/nagaraja2015iccv-video/) doi:10.1109/ICCV.2015.370

BibTeX

@inproceedings{nagaraja2015iccv-video,
  title     = {{Video Segmentation with Just a Few Strokes}},
  author    = {Nagaraja, Naveen Shankar and Schmidt, Frank R. and Brox, Thomas},
  booktitle = {International Conference on Computer Vision},
  year      = {2015},
  doi       = {10.1109/ICCV.2015.370},
  url       = {https://mlanthology.org/iccv/2015/nagaraja2015iccv-video/}
}