3D Articulated Motion Estimation from Images

Abstract

This paper presents a new method for motion analysis of articulated objects from feature point correspondences over monocular perspective images, without imposing any constraints on the motion. The 3D joint positions of an articulated object are estimated up to a scale factor using the connection relationship between two links over two or three images. Twists and exponential maps are then employed to represent the motion of each link, and constraints from image point correspondences are developed to estimate the motion. The algorithm exploits a characteristic of articulated motion, namely the motion correlation among links, to reduce the complexity of the problem and improve robustness. A point pattern matching algorithm for articulated objects is also discussed. Simulations and experiments on real images demonstrate the correctness and efficiency of the algorithms.
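The abstract's twist-and-exponential-map representation of rigid link motion has a standard closed form (the Murray–Li–Sastry convention). The paper's own formulation is not reproduced here; the sketch below only illustrates, under that standard convention, how a twist ξ = (v, ω) scaled by θ maps to a 4×4 rigid-body transform in SE(3):

```python
import numpy as np

def hat(w):
    """Skew-symmetric (so(3) hat) matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def twist_exp(v, w, theta):
    """Exponential map of the twist xi = (v, w) scaled by theta.

    Returns a 4x4 homogeneous transform in SE(3). The rotation uses
    Rodrigues' formula; the translation uses the standard closed form
    (I - R)(w x v) + w w^T v theta for a unit rotation axis.
    """
    v = np.asarray(v, dtype=float)
    w = np.asarray(w, dtype=float)
    T = np.eye(4)
    if np.allclose(w, 0.0):
        # Zero angular velocity: pure translation along v.
        T[:3, 3] = v * theta
        return T
    # Normalize so the rotation axis is a unit vector; fold the norm into theta.
    wn = np.linalg.norm(w)
    w, v, theta = w / wn, v / wn, theta * wn
    W = hat(w)
    R = np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)
    T[:3, :3] = R
    T[:3, 3] = (np.eye(3) - R) @ (W @ v) + np.outer(w, w) @ v * theta
    return T
```

For example, a twist with ω = (0, 0, 1) and v = 0 describes rotation about the z-axis through the origin, so `twist_exp([0, 0, 0], [0, 0, 1], np.pi / 2)` has zero translation and rotates the x-axis onto the y-axis.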

Cite

Text

Zhang and Liu. "3D Articulated Motion Estimation from Images." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2005. doi:10.1109/CVPR.2005.10

Markdown

[Zhang and Liu. "3D Articulated Motion Estimation from Images." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2005.](https://mlanthology.org/cvpr/2005/zhang2005cvpr-d-a/) doi:10.1109/CVPR.2005.10

BibTeX

@inproceedings{zhang2005cvpr-d-a,
  title     = {{3D Articulated Motion Estimation from Images}},
  author    = {Zhang, Xiaoyun and Liu, Yuncai},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2005},
  pages     = {308--314},
  doi       = {10.1109/CVPR.2005.10},
  url       = {https://mlanthology.org/cvpr/2005/zhang2005cvpr-d-a/}
}