Estimation of Human Figure Motion Using Robust Tracking of Articulated Layers

Abstract

We propose a probabilistic method for tracking articulated objects, such as the human figure, across multiple layers in a monocular image sequence. In this method, each link of a probabilistic articulated object is assigned to one individual image layer. The layered representation allows us to robustly model the pose and occlusion of object parts during the object's motion. The appearance of links is described in terms of learned statistics of basic image features, such as color, and geometric models of robust spatial kernels. This results in a highly efficient computational method for inference of the object's pose. We apply this approach to tracking the human figure in monocular video sequences. We show that the proposed method, coupled with a learned dynamic model, can lead to a robust articulated object tracker.
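To make the appearance model concrete, the following is a minimal illustrative sketch (not the authors' implementation) of scoring one link's image layer: a per-channel Gaussian color model weighted by a robust Epanechnikov spatial kernel over an elliptical support. The function name, parameters, and kernel choice are assumptions for illustration only.

```python
import numpy as np

def layer_score(image, center, radii, color_mean, color_std):
    """Kernel-weighted color score for one articulated layer (link).

    Hypothetical sketch: the link's spatial support is an Epanechnikov
    kernel on an ellipse given by (center, radii); its appearance is a
    per-channel Gaussian color model (color_mean, color_std).
    """
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalized squared distance to the ellipse center
    d2 = ((xs - center[0]) / radii[0]) ** 2 + ((ys - center[1]) / radii[1]) ** 2
    # Epanechnikov profile: positive inside the ellipse, zero outside,
    # so distant pixels (likely other layers or background) are ignored
    kernel = np.clip(1.0 - d2, 0.0, None)
    # Per-pixel Gaussian color log-likelihood under the link's color model
    z = (image - color_mean) / color_std
    loglik = -0.5 * np.sum(z ** 2, axis=-1)
    wsum = kernel.sum()
    return float((kernel * loglik).sum() / wsum) if wsum > 0 else -np.inf
```

A pose hypothesis that places the link's ellipse over pixels matching its learned color statistics scores higher than one placed over the background, which is the basic signal a layered tracker would combine across links and with a dynamic model.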

Cite

Text

Moon and Pavlovic. "Estimation of Human Figure Motion Using Robust Tracking of Articulated Layers." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2005. doi:10.1109/CVPR.2005.452

Markdown

[Moon and Pavlovic. "Estimation of Human Figure Motion Using Robust Tracking of Articulated Layers." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2005.](https://mlanthology.org/cvprw/2005/moon2005cvprw-estimation/) doi:10.1109/CVPR.2005.452

BibTeX

@inproceedings{moon2005cvprw-estimation,
  title     = {{Estimation of Human Figure Motion Using Robust Tracking of Articulated Layers}},
  author    = {Moon, Kooksang and Pavlovic, Vladimir},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2005},
  pages     = {83},
  doi       = {10.1109/CVPR.2005.452},
  url       = {https://mlanthology.org/cvprw/2005/moon2005cvprw-estimation/}
}