An Adaptive Appearance Model Approach for Model-Based Articulated Object Tracking
Abstract
The detection and tracking of three-dimensional human body models has progressed rapidly, but successful approaches typically rely on accurate foreground silhouettes obtained using background segmentation. There are many practical applications where such information is imprecise. Here we develop a new image likelihood function based on the visual appearance of the subject being tracked. We propose a robust, adaptive appearance model based on the Wandering-Stable-Lost framework, extended to the case of articulated body parts. The method models appearance using a mixture model that includes an adaptive template, frame-to-frame matching, and an outlier process. We employ an annealed particle filtering algorithm for inference and take advantage of the 3D body model to predict self-occlusion and improve pose estimation accuracy. Quantitative tracking results are presented for a walking sequence with a 180-degree turn, captured with four synchronized and calibrated cameras and containing significant appearance changes and self-occlusion in each view.
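The Wandering-Stable-Lost appearance model the abstract describes scores each observed pixel under a three-component mixture: a "stable" component tied to a slowly adapting template, a "wandering" component tied to the previous frame, and a "lost" outlier component. A minimal sketch of such a per-pixel mixture likelihood is shown below; the mixture weights, noise scales, and function names are illustrative assumptions, not the paper's actual parameterization.

```python
import math

def wsl_likelihood(pixel, template_mu, prev_pixel,
                   weights=(0.3, 0.6, 0.1),
                   sigma_w=10.0, sigma_s=5.0, lost_range=256.0):
    """Per-pixel likelihood under a WSL-style mixture (illustrative parameters).

    - Wandering: Gaussian around the previous frame's pixel value
      (frame-to-frame matching).
    - Stable: Gaussian around a slowly adapting template mean.
    - Lost: uniform outlier density over the intensity range.
    """
    w_w, w_s, w_l = weights

    def gauss(x, mu, sigma):
        # Gaussian density used for the wandering and stable components.
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    return (w_w * gauss(pixel, prev_pixel, sigma_w)
            + w_s * gauss(pixel, template_mu, sigma_s)
            + w_l * (1.0 / lost_range))

# A pixel consistent with both the template and the previous frame scores
# higher than one matching neither, which falls back on the outlier term.
good = wsl_likelihood(100.0, template_mu=98.0, prev_pixel=101.0)
bad = wsl_likelihood(200.0, template_mu=98.0, prev_pixel=101.0)
```

In a full tracker, such a likelihood would be evaluated over all pixels of each projected body part and combined across views; the annealed particle filter then searches the pose space using this image likelihood.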
Cite
Text
Balan and Black. "An Adaptive Appearance Model Approach for Model-Based Articulated Object Tracking." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2006. doi:10.1109/CVPR.2006.52
Markdown
[Balan and Black. "An Adaptive Appearance Model Approach for Model-Based Articulated Object Tracking." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2006.](https://mlanthology.org/cvpr/2006/balan2006cvpr-adaptive/) doi:10.1109/CVPR.2006.52
BibTeX
@inproceedings{balan2006cvpr-adaptive,
title = {{An Adaptive Appearance Model Approach for Model-Based Articulated Object Tracking}},
author = {Balan, Alexandru O. and Black, Michael J.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2006},
pages = {758-765},
doi = {10.1109/CVPR.2006.52},
url = {https://mlanthology.org/cvpr/2006/balan2006cvpr-adaptive/}
}