Real-Time Pose Estimation of Articulated Objects Using Low-Level Motion
Abstract
We present a method capable of tracking and estimating the pose of articulated objects in real time. This is achieved using a bottom-up approach to detect instances of the object in each frame; these detections are then linked together using a high-level a priori motion model. Unlike other approaches that rely on appearance, our method depends entirely on motion: initial low-level part detection is based on how a region moves rather than on its appearance. This work is best described as pictorial structures using motion. A sparse cloud of points extracted using a standard feature tracker is used as observational data; this data contains noise that is not Gaussian in nature but systematic, arising from tracking errors. Using a probabilistic framework, we are able to overcome both corrupt and missing data whilst still inferring new poses from a generative model. Our approach requires no manual initialisation, and we show results for a number of complex scenes and different classes of articulated object, demonstrating both the robustness and versatility of the presented technique.
Cite
Text
Daubney et al. "Real-Time Pose Estimation of Articulated Objects Using Low-Level Motion." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2008. doi:10.1109/CVPR.2008.4587530
Markdown
[Daubney et al. "Real-Time Pose Estimation of Articulated Objects Using Low-Level Motion." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2008.](https://mlanthology.org/cvpr/2008/daubney2008cvpr-real/) doi:10.1109/CVPR.2008.4587530
BibTeX
@inproceedings{daubney2008cvpr-real,
title = {{Real-Time Pose Estimation of Articulated Objects Using Low-Level Motion}},
author = {Daubney, Ben and Gibson, David P. and Campbell, Neill W.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2008},
doi = {10.1109/CVPR.2008.4587530},
url = {https://mlanthology.org/cvpr/2008/daubney2008cvpr-real/}
}