Monocular Multiview Object Tracking with 3D Aspect Parts
Abstract
In this work, we focus on the problem of tracking objects under significant viewpoint variations, which poses a major challenge to traditional object tracking methods. We propose a novel method to track an object and estimate its continuous pose and part locations under severe viewpoint change. In order to handle the change in topological appearance introduced by viewpoint transformations, we represent objects with 3D aspect parts and model the relationship between viewpoint and 3D aspect parts in a part-based particle filtering framework. Moreover, we show that instance-level online-learned part appearance can be incorporated into our model, which makes it more robust in difficult scenarios with occlusions. Experiments are conducted on a new dataset of challenging YouTube videos and a subset of the KITTI dataset [14] that include significant viewpoint variations, as well as a standard sequence for car tracking. We demonstrate that our method is able to track the 3D aspect parts and the viewpoint of objects accurately despite significant changes in viewpoint.
Cite
Text
Xiang et al. "Monocular Multiview Object Tracking with 3D Aspect Parts." European Conference on Computer Vision, 2014. doi:10.1007/978-3-319-10599-4_15
Markdown
[Xiang et al. "Monocular Multiview Object Tracking with 3D Aspect Parts." European Conference on Computer Vision, 2014.](https://mlanthology.org/eccv/2014/xiang2014eccv-monocular/) doi:10.1007/978-3-319-10599-4_15
BibTeX
@inproceedings{xiang2014eccv-monocular,
title = {{Monocular Multiview Object Tracking with 3D Aspect Parts}},
author = {Xiang, Yu and Song, Changkyu and Mottaghi, Roozbeh and Savarese, Silvio},
booktitle = {European Conference on Computer Vision},
year = {2014},
pages = {220-235},
doi = {10.1007/978-3-319-10599-4_15},
url = {https://mlanthology.org/eccv/2014/xiang2014eccv-monocular/}
}