Supervoxel-Consistent Foreground Propagation in Video

Abstract

A major challenge in video segmentation is that the foreground object may move quickly through the scene while its appearance and shape evolve over time. While pairwise potentials used in graph-based algorithms help smooth labels between neighboring (super)pixels in space and time, they offer only a myopic view of consistency and can be misled by inter-frame optical flow errors. We propose a higher order supervoxel label consistency potential for semi-supervised foreground segmentation. Given an initial frame with manual annotation for the foreground object, our approach propagates the foreground region through time, leveraging bottom-up supervoxels to guide its estimates towards long-range coherent regions. We validate our approach on three challenging datasets and achieve state-of-the-art results.
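To make the idea of a higher order label consistency potential concrete, here is a minimal illustrative sketch (not the authors' code, and the exact potential, parameter names, and truncation scheme are assumptions): a robust cost over one supervoxel that penalizes pixels disagreeing with the supervoxel's majority foreground/background label, truncated so that a few outlier pixels do not dominate the energy.

```python
def supervoxel_consistency_cost(labels, gamma_max=1.0, trunc_frac=0.3):
    """Illustrative higher-order label-consistency cost for one supervoxel.

    labels     : list of 0/1 (background/foreground) pixel labels
                 belonging to a single supervoxel.
    gamma_max  : maximum penalty when many pixels disagree (assumed name).
    trunc_frac : fraction of pixels at which the penalty saturates
                 (assumed truncation scheme).

    The cost is zero when all pixels agree, grows linearly with the
    number of pixels that differ from the majority label, and is
    truncated at gamma_max -- encouraging, but not forcing, every
    pixel in the supervoxel to take the same foreground label.
    """
    n = len(labels)
    if n == 0:
        return 0.0
    n_fg = sum(labels)
    # Pixels whose label differs from the supervoxel's majority label.
    n_disagree = min(n_fg, n - n_fg)
    q = trunc_frac * n  # truncation threshold in number of pixels
    if q <= 0:
        return gamma_max if n_disagree > 0 else 0.0
    return gamma_max * min(n_disagree / q, 1.0)
```

In a full energy this term would be summed over all supervoxels and added to the usual unary (appearance) and pairwise (spatio-temporal smoothness) terms, giving the long-range coherence that purely pairwise models lack.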

Cite

Text

Jain and Grauman. "Supervoxel-Consistent Foreground Propagation in Video." European Conference on Computer Vision, 2014. doi:10.1007/978-3-319-10593-2_43

Markdown

[Jain and Grauman. "Supervoxel-Consistent Foreground Propagation in Video." European Conference on Computer Vision, 2014.](https://mlanthology.org/eccv/2014/jain2014eccv-supervoxel/) doi:10.1007/978-3-319-10593-2_43

BibTeX

@inproceedings{jain2014eccv-supervoxel,
  title     = {{Supervoxel-Consistent Foreground Propagation in Video}},
  author    = {Jain, Suyog Dutt and Grauman, Kristen},
  booktitle = {European Conference on Computer Vision},
  year      = {2014},
  pages     = {656--671},
  doi       = {10.1007/978-3-319-10593-2_43},
  url       = {https://mlanthology.org/eccv/2014/jain2014eccv-supervoxel/}
}