Complex Non-Rigid Motion 3D Reconstruction by Union of Subspaces

Abstract

The task of estimating complex non-rigid 3D motion from a monocular camera is of increasing interest to the wider scientific community. Assuming one has the 2D point tracks of the non-rigid object in question, the vision community refers to this problem as Non-Rigid Structure from Motion (NRSfM). In this paper we make two contributions. First, we demonstrate empirically that the current state-of-the-art approach to NRSfM (i.e., Dai et al. [5]) exhibits poor reconstruction performance on complex motion, i.e., motion composed of a sequence of primitive actions, such as a human subject who walks, sits and stands. Second, we propose that this limitation can be circumvented by modeling complex motion as a union of subspaces. Such a model does not arise naturally in Dai et al.'s approach, which instead makes a less compact summation-of-subspaces assumption. Experiments on both synthetic and real videos illustrate the benefits of our approach for complex non-rigid motion analysis.
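To make the contrast concrete, the sketch below restates the two modelling assumptions in standard shape-basis notation; the symbols (S_t for the shape at frame t, B_k for basis shapes, c_tk for coefficients, and the per-action bases B_k^(l)) are illustrative assumptions introduced here, not the paper's exact formulation. The single low-rank model underlying Dai et al. places every frame in one global subspace, whereas a union-of-subspaces model lets each frame's shape come from the subspace of whichever primitive action is being performed:

% Single low-rank (summation-of-subspaces) model: every shape S_t lies in
% one global K-dimensional subspace spanned by basis shapes B_1, ..., B_K.
S_t = \sum_{k=1}^{K} c_{tk} B_k, \qquad S_t \in \mathbb{R}^{3 \times P}

% Union-of-subspaces model: each shape lies in one of L action-specific
% subspaces, each spanned by its own (much smaller) set of basis shapes.
S_t \in \bigcup_{l=1}^{L} \mathcal{S}_l, \qquad
\mathcal{S}_l = \operatorname{span}\{ B^{(l)}_1, \dots, B^{(l)}_{K_l} \}

Intuitively, covering walk, sit and stand with a single basis forces K to grow to span all of the actions at once, while the union keeps each K_l small, which is the sense in which the summation-of-subspaces assumption is the less compact of the two.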

Cite

Text

Zhu et al. "Complex Non-Rigid Motion 3D Reconstruction by Union of Subspaces." Conference on Computer Vision and Pattern Recognition, 2014. doi:10.1109/CVPR.2014.200

Markdown

[Zhu et al. "Complex Non-Rigid Motion 3D Reconstruction by Union of Subspaces." Conference on Computer Vision and Pattern Recognition, 2014.](https://mlanthology.org/cvpr/2014/zhu2014cvpr-complex/) doi:10.1109/CVPR.2014.200

BibTeX

@inproceedings{zhu2014cvpr-complex,
  title     = {{Complex Non-Rigid Motion 3D Reconstruction by Union of Subspaces}},
  author    = {Zhu, Yingying and Huang, Dong and De La Torre, Fernando and Lucey, Simon},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2014},
  doi       = {10.1109/CVPR.2014.200},
  url       = {https://mlanthology.org/cvpr/2014/zhu2014cvpr-complex/}
}