Hough-Based Tracking of Non-Rigid Objects
Abstract
Online learning has been shown to be successful in tracking previously unknown objects. However, most approaches are limited to a bounding-box representation with fixed aspect ratio. Thus, they provide a less accurate foreground/background separation and cannot handle highly non-rigid and articulated objects. This, in turn, increases the amount of noise introduced during online self-training. In this paper, we present a novel tracking-by-detection approach to overcome this limitation based on the generalized Hough transform. We extend the idea of Hough Forests to the online domain and couple the voting-based detection and back-projection with a rough segmentation based on GrabCut. This significantly reduces the amount of noisy training samples during online learning and thus effectively prevents the tracker from drifting. In the experiments, we demonstrate that our method successfully tracks a variety of previously unknown objects even under heavy non-rigid transformations, partial occlusions, scale changes and rotations. Moreover, we compare our tracker to state-of-the-art methods (both bounding-box-based as well as part-based) and show robust and accurate tracking results on various challenging sequences.
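The abstract's core mechanism is generalized Hough voting with back-projection: image patches cast votes for the object center, the vote peak gives the detection, and the patches that supported the peak are traced back to form a rough foreground estimate for segmentation. The paper learns these votes online with Hough Forests; the sketch below is only a minimal toy illustration of the voting-and-back-projection idea (hand-set offsets, not a learned forest), using hypothetical helper names:

```python
import numpy as np

def hough_vote(patch_centers, offsets, shape):
    """Accumulate votes for the object center.

    Each patch at position p casts a vote at p + offset, where the
    offset points from the patch toward the (learned) object center.
    """
    acc = np.zeros(shape, dtype=float)
    for (py, px), (dy, dx) in zip(patch_centers, offsets):
        y, x = py + dy, px + dx
        if 0 <= y < shape[0] and 0 <= x < shape[1]:
            acc[y, x] += 1.0
    return acc

def back_project(patch_centers, offsets, peak, tol=1):
    """Return indices of patches whose vote landed near the peak.

    These supporting patches form the rough foreground estimate that a
    segmentation step (GrabCut in the paper) would then refine.
    """
    support = []
    for i, ((py, px), (dy, dx)) in enumerate(zip(patch_centers, offsets)):
        if abs(py + dy - peak[0]) <= tol and abs(px + dx - peak[1]) <= tol:
            support.append(i)
    return support

# Toy example: four foreground patches agree on center (5, 5),
# one background patch votes elsewhere.
centers = [(3, 5), (7, 5), (5, 3), (5, 7), (0, 0)]
offsets = [(2, 0), (-2, 0), (0, 2), (0, -2), (1, 1)]
acc = hough_vote(centers, offsets, (10, 10))
peak = tuple(int(v) for v in np.unravel_index(np.argmax(acc), acc.shape))
print(peak)                                        # (5, 5)
print(back_project(centers, offsets, peak))        # [0, 1, 2, 3]
```

Back-projecting only the peak-supporting patches is what lets the method discard background patches before self-training, which is how the paper reduces label noise during online updates.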
Cite
Text
Godec et al. "Hough-Based Tracking of Non-Rigid Objects." IEEE/CVF International Conference on Computer Vision, 2011. doi:10.1109/ICCV.2011.6126228
Markdown
[Godec et al. "Hough-Based Tracking of Non-Rigid Objects." IEEE/CVF International Conference on Computer Vision, 2011.](https://mlanthology.org/iccv/2011/godec2011iccv-hough/) doi:10.1109/ICCV.2011.6126228
BibTeX
@inproceedings{godec2011iccv-hough,
title = {{Hough-Based Tracking of Non-Rigid Objects}},
author = {Godec, Martin and Roth, Peter M. and Bischof, Horst},
booktitle = {IEEE/CVF International Conference on Computer Vision},
year = {2011},
  pages = {81--88},
doi = {10.1109/ICCV.2011.6126228},
url = {https://mlanthology.org/iccv/2011/godec2011iccv-hough/}
}