A Multi-Transformational Model for Background Subtraction with Moving Cameras
Abstract
We introduce a new approach to perform background subtraction in moving camera scenarios. Unlike previous treatments of the problem, we do not restrict the camera motion or the scene geometry. The proposed approach relies on Bayesian selection of the transformation that best describes the geometric relation between consecutive frames. Based on the selected transformation, we propagate a set of learned background and foreground appearance models using a single homography or a series of homography transforms. The propagated models are then subjected to a MAP-MRF optimization framework that combines motion, appearance, spatial, and temporal cues; the optimization process provides the final background/foreground labels. Extensive experimental evaluation on challenging videos shows that the proposed method outperforms the baseline and state-of-the-art methods in most cases.
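The core propagation step warps the learned per-pixel models from the previous frame into the current frame under the selected inter-frame transformation. The paper does not publish reference code; the sketch below is a minimal illustration of that step for the single-homography case, assuming a 3x3 homography `H` (estimated elsewhere, e.g. from matched features) and a per-pixel background model stored as a 2D array. Function and variable names are hypothetical, and nearest-neighbor inverse mapping is used for simplicity.

```python
import numpy as np

def propagate_model(model, H, out_shape):
    """Warp a per-pixel model (e.g., background appearance/probability map)
    into the current frame via inverse mapping under homography H.

    model:     2D array defined over the previous frame.
    H:         3x3 homography mapping previous-frame coords -> current frame.
    out_shape: (height, width) of the current frame.
    Pixels whose preimage falls outside the model are left at 0 (unknown).
    """
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Homogeneous coordinates of every current-frame pixel.
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    # Inverse mapping: where did each current pixel come from?
    src = np.linalg.inv(H) @ pts
    src = src / src[2]                      # dehomogenize
    sx = np.rint(src[0]).astype(int)        # nearest-neighbor sampling
    sy = np.rint(src[1]).astype(int)
    valid = (sx >= 0) & (sx < model.shape[1]) & (sy >= 0) & (sy < model.shape[0])
    out = np.zeros(out_shape, dtype=model.dtype)
    out.ravel()[valid] = model[sy[valid], sx[valid]]
    return out
```

In the full method, the propagated background and foreground models would feed the unary terms of the MAP-MRF labeling; this sketch covers only the geometric warp.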
Cite
Text
Zamalieva et al. "A Multi-Transformational Model for Background Subtraction with Moving Cameras." European Conference on Computer Vision, 2014. doi:10.1007/978-3-319-10590-1_52
Markdown
[Zamalieva et al. "A Multi-Transformational Model for Background Subtraction with Moving Cameras." European Conference on Computer Vision, 2014.](https://mlanthology.org/eccv/2014/zamalieva2014eccv-multi/) doi:10.1007/978-3-319-10590-1_52
BibTeX
@inproceedings{zamalieva2014eccv-multi,
title = {{A Multi-Transformational Model for Background Subtraction with Moving Cameras}},
author = {Zamalieva, Daniya and Yilmaz, Alper and Davis, James W.},
booktitle = {European Conference on Computer Vision},
year = {2014},
pages = {803--817},
doi = {10.1007/978-3-319-10590-1_52},
url = {https://mlanthology.org/eccv/2014/zamalieva2014eccv-multi/}
}