Warping Background Subtraction

Abstract

We present a background model that differentiates between background motion and foreground objects. Unlike most models, which represent the variability of pixel intensity at a particular image location, we model the underlying warping of pixel locations arising from background motion. The background is modeled as a set of warping layers, where at any given time different layers may be visible due to the motion of an occluding layer. Foreground regions are thus defined as those that cannot be modeled by any composition of warpings of these background layers. We illustrate this concept by first reducing the possible warps to those where pixels are restricted to displacements within a spatial neighborhood, and then learning the appropriate size of that neighborhood. We then show how changes in the intensity/color histograms of pixel neighborhoods can be used to discriminate foreground and background regions. We find that this approach compares favorably with the state of the art, while requiring less computation.
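The core idea of restricting warps to local displacements can be sketched as follows: a pixel is labeled background if some background-model pixel within a small spatial neighborhood can explain its value. This is a minimal toy illustration, not the authors' implementation; the `radius` and `tol` parameters are assumptions standing in for the learned neighborhood size and the model's appearance tolerance, and the single-image background stands in for the paper's set of warping layers.

```python
import numpy as np

def warping_foreground_mask(frame, background, radius=2, tol=0.1):
    """Toy warping background subtraction on grayscale images in [0, 1].

    A pixel is foreground only if NO background pixel within a
    (2*radius+1)^2 neighborhood matches it to within `tol` -- i.e. no
    allowed local warp of the background can explain it.
    """
    h, w = frame.shape
    # Pad so every pixel has a full neighborhood to search over.
    pad = np.pad(background, radius, mode="edge")
    explained = np.zeros((h, w), dtype=bool)
    # Try every displacement (dy, dx) within the allowed neighborhood.
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy : radius + dy + h,
                          radius + dx : radius + dx + w]
            explained |= np.abs(frame - shifted) <= tol
    return ~explained  # True where no local warp explains the pixel
```

For example, a background edge (e.g. a swaying branch boundary) that shifts by one pixel between the model and the current frame is absorbed by the neighborhood search and stays background, while a genuinely new intensity appearing far from any matching background value is flagged as foreground.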

Cite

Text

Ko et al. "Warping Background Subtraction." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2010. doi:10.1109/CVPR.2010.5539813

Markdown

[Ko et al. "Warping Background Subtraction." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2010.](https://mlanthology.org/cvpr/2010/ko2010cvpr-warping/) doi:10.1109/CVPR.2010.5539813

BibTeX

@inproceedings{ko2010cvpr-warping,
  title     = {{Warping Background Subtraction}},
  author    = {Ko, Teresa and Soatto, Stefano and Estrin, Deborah},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2010},
  pages     = {1331--1338},
  doi       = {10.1109/CVPR.2010.5539813},
  url       = {https://mlanthology.org/cvpr/2010/ko2010cvpr-warping/}
}