Flexible Background Subtraction with Self-Balanced Local Sensitivity

Abstract

Most background subtraction approaches offer decent results in baseline scenarios, but adaptive and flexible solutions are still uncommon, as many require scenario-specific parameter tuning to achieve optimal performance. In this paper, we introduce a new strategy to tackle this problem that focuses on balancing the inner workings of a non-parametric model via pixel-level feedback loops. Pixels are modeled using a spatiotemporal feature descriptor for increased sensitivity. Using the video sequences and ground truth annotations of the 2012 and 2014 CVPR Change Detection Workshops, we demonstrate that our approach outperforms all previously ranked methods on the original dataset while achieving good results on the most recent one.
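To illustrate the general idea of pixel-level feedback loops in a non-parametric background model, here is a minimal sketch. It is not the paper's method: the sample-consensus classification, the constants, and all names (`FeedbackPixelModel`, the threshold growth/decay factors) are illustrative assumptions, and the spatiotemporal descriptor is reduced to a scalar intensity for brevity.

```python
import random

class FeedbackPixelModel:
    """Toy per-pixel background model with a feedback-adjusted distance
    threshold. Purely illustrative; constants and update rules are
    assumptions, not the values used in the paper."""

    def __init__(self, init_value, n_samples=10, base_threshold=20.0):
        # Non-parametric model: a small set of past observations per pixel.
        self.samples = [init_value] * n_samples
        self.threshold = base_threshold  # per-pixel distance threshold

    def classify_and_update(self, value, min_matches=2):
        # A pixel is background if enough stored samples lie within
        # the current (locally adapted) distance threshold.
        matches = sum(1 for s in self.samples
                      if abs(s - value) <= self.threshold)
        is_foreground = matches < min_matches
        if is_foreground:
            # Feedback: relax the threshold where the pixel is unstable,
            # lowering local sensitivity to suppress noise.
            self.threshold = min(self.threshold * 1.05, 80.0)
        else:
            # Feedback: slowly restore sensitivity in stable regions,
            # and randomly absorb the observation into the sample set.
            self.threshold = max(self.threshold * 0.98, 5.0)
            self.samples[random.randrange(len(self.samples))] = value
        return is_foreground
```

In this sketch, each pixel's threshold drifts up under persistent disagreement and back down during stable periods, so sensitivity balances itself locally without scenario-specific tuning.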

Cite

Text

St-Charles et al. "Flexible Background Subtraction with Self-Balanced Local Sensitivity." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2014. doi:10.1109/CVPRW.2014.67

Markdown

[St-Charles et al. "Flexible Background Subtraction with Self-Balanced Local Sensitivity." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2014.](https://mlanthology.org/cvprw/2014/stcharles2014cvprw-flexible/) doi:10.1109/CVPRW.2014.67

BibTeX

@inproceedings{stcharles2014cvprw-flexible,
  title     = {{Flexible Background Subtraction with Self-Balanced Local Sensitivity}},
  author    = {St-Charles, Pierre-Luc and Bilodeau, Guillaume-Alexandre and Bergevin, Robert},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2014},
  pages     = {414--419},
  doi       = {10.1109/CVPRW.2014.67},
  url       = {https://mlanthology.org/cvprw/2014/stcharles2014cvprw-flexible/}
}