Segmenting Foreground Objects from a Dynamic Textured Background via a Robust Kalman Filter
Abstract
The algorithm presented in this paper aims to segment the foreground objects in video (e.g., people) given time-varying, textured backgrounds. Examples of time-varying backgrounds include waves on water, moving clouds, trees waving in the wind, automobile traffic, moving crowds, escalators, etc. We have developed a novel foreground-background segmentation algorithm that explicitly accounts for the non-stationary nature and clutter-like appearance of many dynamic textures. The dynamic texture is modeled by an Autoregressive Moving Average (ARMA) model. A robust Kalman filter algorithm iteratively estimates the intrinsic appearance of the dynamic texture, as well as the regions of the foreground objects. Preliminary experiments with this method have demonstrated promising results.
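The core idea can be illustrated with a small sketch: a linear state-space (ARMA-style) model predicts the background's appearance, and pixels whose innovation is too large relative to the predicted uncertainty are flagged as foreground and excluded from the Kalman measurement update. This is a minimal, hypothetical illustration of a robust Kalman step, not the paper's exact formulation; the matrices `A`, `C`, `Q`, `R` and the threshold are assumptions for the example.

```python
import numpy as np

def robust_kalman_step(x, P, y, A, C, Q, R, thresh=3.0):
    """One robust Kalman step (illustrative sketch): predict the
    background appearance, flag pixels whose innovation exceeds
    `thresh` predicted standard deviations as foreground, and run
    the measurement update on the remaining (background) pixels."""
    # Predict the latent state and its covariance.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q

    # Per-pixel innovation and its predicted standard deviation.
    resid = y - C @ x_pred
    S = C @ P_pred @ C.T + R
    sigma = np.sqrt(np.diag(S))

    # Pixels with large residuals are treated as foreground outliers.
    fg_mask = np.abs(resid) > thresh * sigma

    # Robust update: use only the inlier (background) measurements.
    inlier = ~fg_mask
    C_in = C[inlier]
    R_in = R[np.ix_(inlier, inlier)]
    S_in = C_in @ P_pred @ C_in.T + R_in
    K = P_pred @ C_in.T @ np.linalg.inv(S_in)
    x_new = x_pred + K @ resid[inlier]
    P_new = (np.eye(len(x)) - K @ C_in) @ P_pred
    return x_new, P_new, fg_mask

# Toy example: a 2-D state observed through 4 "pixels"; one pixel
# carries an injected outlier that should be masked as foreground.
A = np.array([[0.9, 0.0], [0.0, 0.9]])
C = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
Q, R = 0.01 * np.eye(2), 0.05 * np.eye(4)
x, P = np.zeros(2), np.eye(2)
y = np.array([0.0, 0.0, 5.0, 0.0])  # pixel 2 is an outlier
x, P, mask = robust_kalman_step(x, P, y, A, C, Q, R)
```

In the toy run above only the outlying pixel is flagged, so the state estimate is updated from the three consistent pixels; in the paper's setting the same logic is applied per frame, with the flagged pixels forming the foreground segmentation.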
Cite
Text
Zhong and Sclaroff. "Segmenting Foreground Objects from a Dynamic Textured Background via a Robust Kalman Filter." IEEE/CVF International Conference on Computer Vision, 2003. doi:10.1109/ICCV.2003.1238312
Markdown
[Zhong and Sclaroff. "Segmenting Foreground Objects from a Dynamic Textured Background via a Robust Kalman Filter." IEEE/CVF International Conference on Computer Vision, 2003.](https://mlanthology.org/iccv/2003/zhong2003iccv-segmenting/) doi:10.1109/ICCV.2003.1238312
BibTeX
@inproceedings{zhong2003iccv-segmenting,
title = {{Segmenting Foreground Objects from a Dynamic Textured Background via a Robust Kalman Filter}},
author = {Zhong, Jing and Sclaroff, Stan},
booktitle = {IEEE/CVF International Conference on Computer Vision},
year = {2003},
pages = {44-50},
doi = {10.1109/ICCV.2003.1238312},
url = {https://mlanthology.org/iccv/2003/zhong2003iccv-segmenting/}
}