Better Feature Tracking Through Subspace Constraints
Abstract
Feature tracking in video is a crucial task in computer vision. Usually, the tracking problem is handled one feature at a time, using a single-feature tracker such as the Kanade-Lucas-Tomasi algorithm or one of its derivatives. While this approach works quite well on high-quality video with "strong" features, it often falters on dark, noisy video containing low-quality features. We present a framework for jointly tracking a set of features, which enables sharing information between the different features in the scene. We show that our method can track features under both rigid and non-rigid motion (possibly from a few independently moving bodies), even when some features are occluded. Furthermore, it significantly improves tracking results in poorly lit scenes containing a mix of good and bad features. Our approach does not require direct modeling of the structure or the motion of the scene, and it runs in real time on a single CPU core.
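The abstract only outlines the approach; full details are in the paper. The subspace idea it builds on is classical, though: under an affine camera model, the stacked 2D trajectories of features on a rigid scene form a matrix of rank at most four (the Tomasi-Kanade factorization result), so the per-frame motion of all features lies near a low-dimensional subspace. The sketch below is an illustration of that idea, not the authors' algorithm: it softly projects independently estimated per-frame displacements onto the dominant subspace of recent trajectory history. The function name, the blending weight alpha, and the use of a plain truncated SVD are assumptions made for this example.

import numpy as np

def regularize_displacements(history, raw_disp, rank=4, alpha=0.5):
    # Illustrative sketch, not the paper's algorithm.
    # history:  (2F, T) array of stacked x/y positions of F features
    #           over the last T frames; for a rigid scene this matrix
    #           is approximately low-rank (Tomasi-Kanade).
    # raw_disp: (2F,) displacements for the new frame, estimated
    #           independently per feature (e.g., by a KLT-style tracker).
    # rank:     assumed dimension of the trajectory subspace.
    # alpha:    hypothetical blend between raw and projected estimates.

    # Center the history so the basis captures motion, not mean position.
    centered = history - history.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(centered, full_matrices=False)
    basis = U[:, :rank]  # dominant motion subspace

    # Project the raw displacements onto the subspace; weak features
    # are pulled toward the motion implied by the reliable ones.
    projected = basis @ (basis.T @ raw_disp)
    return alpha * raw_disp + (1.0 - alpha) * projected

# Tiny synthetic check: rank-3 trajectories plus noise.
rng = np.random.default_rng(0)
F, T = 50, 30
history = (rng.standard_normal((2 * F, 3)) @ rng.standard_normal((3, T))
           + 0.01 * rng.standard_normal((2 * F, T)))
raw = rng.standard_normal(2 * F)
print(regularize_displacements(history, raw, rank=3).shape)  # (100,)

In a fuller version one would presumably weight the blend per feature by its confidence, so that strong features keep their raw estimates while weak ones lean on the subspace; that is the kind of information sharing the abstract describes.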
Cite
Text
Poling et al. "Better Feature Tracking Through Subspace Constraints." Conference on Computer Vision and Pattern Recognition, 2014. doi:10.1109/CVPR.2014.441
Markdown
[Poling et al. "Better Feature Tracking Through Subspace Constraints." Conference on Computer Vision and Pattern Recognition, 2014.](https://mlanthology.org/cvpr/2014/poling2014cvpr-better/) doi:10.1109/CVPR.2014.441
BibTeX
@inproceedings{poling2014cvpr-better,
title = {{Better Feature Tracking Through Subspace Constraints}},
author = {Poling, Bryan and Lerman, Gilad and Szlam, Arthur},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2014},
doi = {10.1109/CVPR.2014.441},
url = {https://mlanthology.org/cvpr/2014/poling2014cvpr-better/}
}