Robust Object Tracking via Sparsity-Based Collaborative Model
Abstract
In this paper we propose a robust object tracking algorithm using a collaborative model. As the main challenge for object tracking is to account for drastic appearance change, we propose a robust appearance model that exploits both holistic templates and local representations. We develop a sparsity-based discriminative classifier (SDC) and a sparsity-based generative model (SGM). In the SDC module, we introduce an effective method to compute the confidence value that assigns more weight to the foreground than the background. In the SGM module, we propose a novel histogram-based method that takes the spatial information of each patch into consideration with an occlusion handling scheme. Furthermore, the update scheme considers both the latest observations and the original template, thereby enabling the tracker to deal with appearance change effectively and alleviate the drift problem. Numerous experiments on various challenging videos demonstrate that the proposed tracker performs favorably against several state-of-the-art algorithms.
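Both the SDC and SGM modules rest on sparse coding: a candidate observation is represented as a sparse linear combination of dictionary templates by solving an ℓ1-regularized least-squares problem. The sketch below illustrates that core coding step with ISTA (iterative soft-thresholding), a standard ℓ1 solver chosen here for simplicity; it is not necessarily the solver used in the paper, and the toy dictionary `D` and signal `x` are illustrative only.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise shrinkage operator for the l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, D, lam=0.05, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)          # gradient of the quadratic term
        a = soft_threshold(a - grad / L, lam / L)
    return a

# Toy example: dictionary columns play the role of (normalized) templates;
# the candidate x is a noisy copy of template 2, so its code should
# concentrate on coefficient 2.
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 10))
D /= np.linalg.norm(D, axis=0)            # unit-norm template columns
x = D[:, 2] + 0.01 * rng.standard_normal(32)
a = sparse_code(x, D)
```

In the tracker, codes like `a` feed two different uses: the discriminative module scores a candidate by how well foreground versus background templates reconstruct it, while the generative module builds a sparse-coefficient histogram over local patches.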
Cite
Text
Zhong et al. "Robust Object Tracking via Sparsity-Based Collaborative Model." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2012. doi:10.1109/CVPR.2012.6247882
Markdown
[Zhong et al. "Robust Object Tracking via Sparsity-Based Collaborative Model." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2012.](https://mlanthology.org/cvpr/2012/zhong2012cvpr-robust/) doi:10.1109/CVPR.2012.6247882
BibTeX
@inproceedings{zhong2012cvpr-robust,
title = {{Robust Object Tracking via Sparsity-Based Collaborative Model}},
author = {Zhong, Wei and Lu, Huchuan and Yang, Ming-Hsuan},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2012},
pages = {1838--1845},
doi = {10.1109/CVPR.2012.6247882},
url = {https://mlanthology.org/cvpr/2012/zhong2012cvpr-robust/}
}