Learning a Background Model for Change Detection
Abstract
Change detection, or foreground/background segmentation, has been extensively studied in computer vision, as it constitutes the fundamental step for extracting motion information from video frames. In this paper, we present a robust real-time foreground/background segmentation system employing a background model based on the Chebyshev probability inequality, supported by peripheral and recurrent motion detectors. The system uses shadow detection, and relevance feedback from higher-level object tracking and object classification, to further refine segmentation accuracy. Experimental results on a wide range of test videos demonstrate the high performance of the presented method under dynamic backgrounds, camera jitter, and cast shadows, as well as on thermal video.
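The core idea named in the abstract is a per-pixel background model driven by the Chebyshev inequality, P(|X − μ| ≥ kσ) ≤ 1/k², so a pixel deviating from its running mean by more than k standard deviations is unlikely to be background. The sketch below is a minimal illustration of that test with running mean/variance updates; the class name, the threshold k, the learning rate, and the selective-update rule are all assumptions for illustration, not the paper's actual implementation (which additionally uses motion detectors, shadow detection, and feedback from tracking).

```python
import numpy as np

class ChebyshevBackgroundModel:
    """Illustrative per-pixel background model using the Chebyshev bound
    P(|X - mu| >= k*sigma) <= 1/k^2: pixels deviating from the running
    mean by more than k standard deviations are flagged as foreground.
    (Hypothetical sketch; parameter values are assumptions.)"""

    def __init__(self, first_frame, k=4.0, alpha=0.05):
        self.mean = first_frame.astype(np.float64)       # running mean, seeded from frame 1
        self.var = np.full(first_frame.shape, 25.0)      # assumed initial variance
        self.k = k          # deviation threshold in std-devs (assumed value)
        self.alpha = alpha  # learning rate for the running statistics

    def apply(self, frame):
        frame = frame.astype(np.float64)
        diff = frame - self.mean
        # Chebyshev test: foreground where diff^2 > k^2 * variance
        foreground = diff * diff > (self.k ** 2) * self.var
        # Update statistics only at background pixels, so foreground
        # objects do not get absorbed into the model
        bg = ~foreground
        self.mean[bg] += self.alpha * diff[bg]
        self.var[bg] = (1 - self.alpha) * self.var[bg] + self.alpha * diff[bg] ** 2
        return foreground
```

After a few frames of a static scene the variance shrinks, so even moderate intensity changes exceed the k-sigma band and are reported as change.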
Cite
Text
Morde et al. "Learning a Background Model for Change Detection." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2012. doi:10.1109/CVPRW.2012.6238921

Markdown

[Morde et al. "Learning a Background Model for Change Detection." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2012.](https://mlanthology.org/cvprw/2012/morde2012cvprw-learning/) doi:10.1109/CVPRW.2012.6238921

BibTeX
@inproceedings{morde2012cvprw-learning,
title = {{Learning a Background Model for Change Detection}},
author = {Morde, Ashutosh and Ma, Xiang and Guler, Sadiye},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2012},
  pages = {15--20},
doi = {10.1109/CVPRW.2012.6238921},
url = {https://mlanthology.org/cvprw/2012/morde2012cvprw-learning/}
}