Combining Spatial and Temporal Priors for Articulated Human Tracking with Online Learning
Abstract
We study articulated human tracking by combining spatial and temporal priors in an integrated online learning and inference framework, where body parts can be localized and segmented simultaneously. The temporal prior is represented by the motion trajectory in a low-dimensional latent space learned from the tracking history, and it predicts the configuration of each body part for the next frame. The spatial prior is encoded by a star-structured graphical model and embedded in the temporal prior; it can be constructed "on-the-fly" from the predicted pose and used to evaluate and correct the prediction by assembling part detection results. Both temporal and spatial priors can be learned incrementally online through the Back-Constrained Gaussian Process Latent Variable Model (BC-GPLVM) over a temporal sliding window. Experiments show that the proposed algorithm achieves accurate and robust tracking results for different walking subjects with significant appearance and motion variability.
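The temporal prior described above keeps a sliding window of recent poses in a learned latent space and extrapolates the trajectory to predict the next frame. The following toy sketch illustrates only that sliding-window prediction idea in plain NumPy; the class name, the constant-velocity extrapolation, and the window size are illustrative assumptions, not the paper's BC-GPLVM, which learns the latent space and its dynamics with a Gaussian process.

```python
import numpy as np

class SlidingWindowTemporalPrior:
    """Toy stand-in for a sliding-window temporal prior: keep the most
    recent latent-space poses and extrapolate the trajectory one frame
    ahead. (Illustrative only; the paper uses a BC-GPLVM learned online.)"""

    def __init__(self, window_size=5):
        self.window_size = window_size
        self.history = []  # recent latent pose vectors, oldest first

    def update(self, z):
        """Append the latest latent pose; drop the oldest past capacity."""
        self.history.append(np.asarray(z, dtype=float))
        if len(self.history) > self.window_size:
            self.history.pop(0)

    def predict_next(self):
        """Constant-velocity extrapolation in the latent space."""
        if len(self.history) < 2:
            return self.history[-1]
        z_prev, z_curr = self.history[-2], self.history[-1]
        return z_curr + (z_curr - z_prev)
```

In the paper's framework, the predicted latent point would then be decoded into a body-part configuration and evaluated against part detections via the star-structured spatial prior.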
Cite
Text
Chen and Fan. "Combining Spatial and Temporal Priors for Articulated Human Tracking with Online Learning." IEEE/CVF International Conference on Computer Vision Workshops, 2009. doi:10.1109/ICCVW.2009.5457633
Markdown
[Chen and Fan. "Combining Spatial and Temporal Priors for Articulated Human Tracking with Online Learning." IEEE/CVF International Conference on Computer Vision Workshops, 2009.](https://mlanthology.org/iccvw/2009/chen2009iccvw-combining/) doi:10.1109/ICCVW.2009.5457633
BibTeX
@inproceedings{chen2009iccvw-combining,
title = {{Combining Spatial and Temporal Priors for Articulated Human Tracking with Online Learning}},
author = {Chen, Cheng and Fan, Guoliang},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2009},
pages = {719-726},
doi = {10.1109/ICCVW.2009.5457633},
url = {https://mlanthology.org/iccvw/2009/chen2009iccvw-combining/}
}