Foreground Segmentation of Live Videos Using Locally Competing 1SVMs

Abstract

The objective of foreground segmentation is to extract the desired foreground object from input videos. Over the years there has been a significant amount of effort on this topic; nevertheless, a simple yet effective algorithm that can process live videos of objects with fuzzy boundaries, captured by freely moving cameras, is still lacking. This paper presents an algorithm toward this goal. The key idea is to train and maintain two competing one-class support vector machines (1SVMs) at each pixel location, which model the local color distributions of the foreground and background, respectively. We advocate the use of two competing local classifiers because it provides higher discriminative power and allows better handling of ambiguities. As a result, our algorithm can deal with a variety of videos with complex backgrounds and freely moving cameras with minimal user interaction. In addition, by introducing novel acceleration techniques and by exploiting the parallel structure of the algorithm, real-time processing speed is achieved for VGA-sized videos.
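The competition rule described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it replaces each trained 1SVM's decision function with a simple RBF-kernel score over a handful of hypothetical local color samples (uniform support-vector weights assumed), and keeps only the core idea that, at each pixel, the classifier producing the higher score wins.

```python
import numpy as np

def kernel_score(x, samples, gamma=10.0):
    # RBF-kernel score over stored color samples: a simplified stand-in
    # for a 1SVM decision value with uniform support-vector weights.
    d2 = np.sum((samples - x) ** 2, axis=1)
    return np.mean(np.exp(-gamma * d2))

def classify_pixel(x, fg_samples, bg_samples, gamma=10.0):
    # Locally competing classifiers: the one with the higher score wins.
    fg = kernel_score(x, fg_samples, gamma)
    bg = kernel_score(x, bg_samples, gamma)
    return "foreground" if fg > bg else "background"

# Hypothetical local color samples (RGB in [0, 1]) at one pixel location.
fg_samples = np.array([[0.9, 0.1, 0.1], [0.8, 0.2, 0.1]])  # reddish object
bg_samples = np.array([[0.1, 0.6, 0.2], [0.2, 0.7, 0.3]])  # greenish scene

print(classify_pixel(np.array([0.85, 0.15, 0.1]), fg_samples, bg_samples))
```

In the paper, both per-pixel models are trained and updated online as the video plays; the sketch above only shows the classification step at a single pixel.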

Cite

Text

Gong. "Foreground Segmentation of Live Videos Using Locally Competing 1SVMs." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011. doi:10.1109/CVPR.2011.5995394

Markdown

[Gong. "Foreground Segmentation of Live Videos Using Locally Competing 1SVMs." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011.](https://mlanthology.org/cvpr/2011/gong2011cvpr-foreground/) doi:10.1109/CVPR.2011.5995394

BibTeX

@inproceedings{gong2011cvpr-foreground,
  title     = {{Foreground Segmentation of Live Videos Using Locally Competing 1SVMs}},
  author    = {Gong, Minglun},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2011},
  pages     = {2105-2112},
  doi       = {10.1109/CVPR.2011.5995394},
  url       = {https://mlanthology.org/cvpr/2011/gong2011cvpr-foreground/}
}