Object-Based Multiple Foreground Video Co-Segmentation

Abstract

We present a video co-segmentation method that uses category-independent object proposals as its basic element and can extract multiple foreground objects from a video set. Using object-level elements overcomes the limitations of low-level feature representations in separating complex foregrounds from backgrounds. We formulate object-based co-segmentation as a co-selection graph in which regions with foreground-like characteristics are favored, while intra-video and inter-video foreground coherence are also taken into account. To handle multiple foreground objects, we extend the co-selection graph into a multi-state selection graph (MSG) that optimizes the segmentations of the different objects jointly. This MSG extension applies not only to our co-selection graph; it can also turn any standard graph model into a multi-state selection formulation that can be optimized directly with existing energy minimization techniques. Our experiments show that our object-based multiple foreground video co-segmentation method (ObMiC) compares well to related techniques in both single and multiple foreground cases.
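To make the selection idea concrete, here is a toy sketch (not the authors' implementation) of the single-foreground case: choosing one object proposal per frame by minimizing a unary foreground cost plus a pairwise temporal-coherence cost on a chain, via Viterbi-style dynamic programming. All scores and cost values below are illustrative assumptions.

```python
def select_proposals(unary, pairwise):
    """Pick one proposal index per frame, minimizing total energy.

    unary[t][i]       -- cost of choosing proposal i in frame t
                         (lower = more foreground-like)
    pairwise[t][i][j] -- coherence cost of proposal i in frame t
                         followed by proposal j in frame t+1
    Returns the minimum-energy sequence of proposal indices.
    """
    T = len(unary)
    cost = [list(unary[0])]   # cost[t][j]: best energy ending at (t, j)
    back = []                 # backpointers for path recovery
    for t in range(1, T):
        row, ptr = [], []
        for j in range(len(unary[t])):
            best_i = min(range(len(unary[t - 1])),
                         key=lambda i: cost[-1][i] + pairwise[t - 1][i][j])
            row.append(cost[-1][best_i] + pairwise[t - 1][best_i][j]
                       + unary[t][j])
            ptr.append(best_i)
        cost.append(row)
        back.append(ptr)
    # Backtrack from the cheapest final state.
    j = min(range(len(cost[-1])), key=lambda i: cost[-1][i])
    path = [j]
    for t in range(T - 2, -1, -1):
        j = back[t][j]
        path.append(j)
    return path[::-1]


# Three frames, two proposals each; a strong switching penalty keeps
# the selection temporally coherent even when per-frame scores disagree.
unary = [[0, 5], [5, 0], [0, 5]]
pairwise = [[[0, 10], [10, 0]]] * 2
print(select_proposals(unary, pairwise))  # → [0, 0, 0]
```

The multi-state extension in the paper generalizes this by assigning each node a state per foreground object and optimizing all objects' selections jointly; the chain above shows only the coherence-vs-appearance trade-off at the heart of the formulation.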

Cite

Text

Fu et al. "Object-Based Multiple Foreground Video Co-Segmentation." Conference on Computer Vision and Pattern Recognition, 2014. doi:10.1109/CVPR.2014.405

Markdown

[Fu et al. "Object-Based Multiple Foreground Video Co-Segmentation." Conference on Computer Vision and Pattern Recognition, 2014.](https://mlanthology.org/cvpr/2014/fu2014cvpr-objectbased/) doi:10.1109/CVPR.2014.405

BibTeX

@inproceedings{fu2014cvpr-objectbased,
  title     = {{Object-Based Multiple Foreground Video Co-Segmentation}},
  author    = {Fu, Huazhu and Xu, Dong and Zhang, Bao and Lin, Stephen},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2014},
  doi       = {10.1109/CVPR.2014.405},
  url       = {https://mlanthology.org/cvpr/2014/fu2014cvpr-objectbased/}
}