Background Subtraction for Freely Moving Cameras
Abstract
Background subtraction algorithms define the background as parts of a scene that are at rest. Traditionally, these algorithms assume a stationary camera, and identify moving objects by detecting areas in a video that change over time. In this paper, we extend the concept of 'subtracting' areas at rest to apply to video captured from a freely moving camera. We do not assume that the background is well-approximated by a plane or that the camera center remains stationary during motion. The method operates entirely using 2D image measurements without requiring an explicit 3D reconstruction of the scene. A sparse model of background is built by robustly estimating a compact trajectory basis from trajectories of salient features across the video, and the background is 'subtracted' by removing trajectories that lie within the space spanned by the basis. Foreground and background appearance models are then built, and an optimal pixel-wise foreground/background labeling is obtained by efficiently maximizing a posterior function.
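The core idea — robustly fit a compact trajectory basis and label trajectories outside its span as foreground — can be sketched on synthetic data. This is a minimal illustration, not the paper's implementation: the dimensions, thresholds, and the simple RANSAC-style sampling loop below are assumptions chosen for clarity, and the noise model is invented for the demo.

```python
import numpy as np

# Synthetic setup: background trajectories lie in a low-dimensional
# subspace; independently moving (foreground) trajectories do not.
rng = np.random.default_rng(0)
F, P_bg, P_fg = 30, 80, 20                 # frames, background / foreground points

basis = rng.normal(size=(2 * F, 3))        # rank-3 trajectory basis (x, y per frame)
bg = basis @ rng.normal(size=(3, P_bg))    # background trajectories in span(basis)
fg = 5.0 * rng.normal(size=(2 * F, P_fg))  # foreground: off-subspace trajectories
W = np.hstack([bg, fg]) + 0.01 * rng.normal(size=(2 * F, P_bg + P_fg))

# RANSAC-style robust basis estimation: repeatedly sample 3 trajectories,
# orthonormalize them, and keep the basis that explains the most trajectories.
best_inliers = None
for _ in range(200):
    idx = rng.choice(W.shape[1], size=3, replace=False)
    Q, _ = np.linalg.qr(W[:, idx])                      # orthonormal sample basis
    resid = np.linalg.norm(W - Q @ (Q.T @ W), axis=0)   # residual after projection
    inliers = resid < 0.5                               # threshold is illustrative
    if best_inliers is None or inliers.sum() > best_inliers.sum():
        best_inliers = inliers

# Trajectories outside the span of the estimated background basis are
# 'subtracted', i.e. labeled as foreground.
foreground = ~best_inliers
print("misclassified background:", int(foreground[:P_bg].sum()))
print("detected foreground:", int(foreground[P_bg:].sum()))
```

With this setup the consensus basis captures the background subspace, so background trajectories project with small residual and the off-subspace foreground trajectories stand out. The paper additionally builds appearance models and solves a pixel-wise MAP labeling, which this sketch omits.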
Cite
Text
Sheikh et al. "Background Subtraction for Freely Moving Cameras." IEEE/CVF International Conference on Computer Vision, 2009. doi:10.1109/ICCV.2009.5459334
Markdown
[Sheikh et al. "Background Subtraction for Freely Moving Cameras." IEEE/CVF International Conference on Computer Vision, 2009.](https://mlanthology.org/iccv/2009/sheikh2009iccv-background/) doi:10.1109/ICCV.2009.5459334
BibTeX
@inproceedings{sheikh2009iccv-background,
title = {{Background Subtraction for Freely Moving Cameras}},
author = {Sheikh, Yaser and Javed, Omar and Kanade, Takeo},
booktitle = {IEEE/CVF International Conference on Computer Vision},
year = {2009},
pages = {1219-1225},
doi = {10.1109/ICCV.2009.5459334},
url = {https://mlanthology.org/iccv/2009/sheikh2009iccv-background/}
}