Foreground Segmentation via Dynamic Tree-Structured Sparse RPCA
Abstract
Video analysis often begins with background subtraction, which consists of the creation of a background model followed by a regularization scheme. A recent evaluation of representative background subtraction techniques demonstrated that these methods still face considerable challenges. We present a new method in which we regard the image sequence as the sum of a low-rank background matrix and a dynamic tree-structured sparse outlier matrix, and solve the decomposition using our approximated Robust Principal Component Analysis method, extended to handle camera motion. Our contribution lies in dynamically estimating the support of the foreground regions via a superpixel generation step, so as to impose spatial coherence on these regions and obtain crisp, meaningful foreground regions. These advantages enable our method to outperform state-of-the-art alternatives on three benchmark datasets.
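The decomposition described in the abstract builds on Robust PCA: stacking vectorized frames as columns of a data matrix D and splitting it as D = L + S, where L is the low-rank background and S the sparse foreground. The sketch below implements only the generic principal component pursuit via inexact ALM (singular value thresholding plus soft thresholding), not the paper's dynamic tree-structured sparsity or camera-motion handling; the function name and parameter defaults are illustrative, not from the paper.

```python
import numpy as np

def rpca(D, lam=None, tol=1e-7, max_iter=500):
    """Decompose D ~ L + S (L low-rank, S sparse) via inexact ALM.

    This is plain principal component pursuit, used here only to
    illustrate the background/foreground split; it lacks the paper's
    structured-sparsity and motion extensions.
    """
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))  # standard PCP weight
    norm_D = np.linalg.norm(D, 'fro')
    mu = 1.25 / np.linalg.norm(D, 2)
    rho = 1.5
    S = np.zeros_like(D)
    Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)
    for _ in range(max_iter):
        # Background update: singular value thresholding at level 1/mu
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # Foreground update: entrywise soft thresholding at level lam/mu
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
        # Dual ascent on the constraint D = L + S
        Z = D - L - S
        Y = Y + mu * Z
        mu = min(mu * rho, 1e7)
        if np.linalg.norm(Z, 'fro') / norm_D < tol:
            break
    return L, S
```

For video, each column of D would be one flattened grayscale frame; the static scene content lands in L and moving objects in S, after which the paper's superpixel-driven support estimation would refine S into coherent regions.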
Cite
Text
Ebadi and Izquierdo. "Foreground Segmentation via Dynamic Tree-Structured Sparse RPCA." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46448-0_19
Markdown
[Ebadi and Izquierdo. "Foreground Segmentation via Dynamic Tree-Structured Sparse RPCA." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/ebadi2016eccv-foreground/) doi:10.1007/978-3-319-46448-0_19
BibTeX
@inproceedings{ebadi2016eccv-foreground,
title = {{Foreground Segmentation via Dynamic Tree-Structured Sparse RPCA}},
author = {Ebadi, Salehe Erfanian and Izquierdo, Ebroul},
booktitle = {European Conference on Computer Vision},
year = {2016},
pages = {314--329},
doi = {10.1007/978-3-319-46448-0_19},
url = {https://mlanthology.org/eccv/2016/ebadi2016eccv-foreground/}
}