Which Shape from Motion?

Abstract

In practical situations, the rigid transformation relating different views is recovered with errors. As a result, the recovered depth of the scene also contains errors, and consequently a distorted version of visual space is computed. What, then, are meaningful shape representations that can be computed from the images? The result presented in this paper states that if the rigid transformation between different views is estimated in a way that gives rise to a minimum number of negative depth values, then at the center of the image, affine shape can be correctly computed. This result is obtained by exploiting properties of the distortion function. The distortion model turns out to be a very powerful tool in the analysis and design of 3D motion and shape estimation algorithms, and as a byproduct of our analysis we present a computational explanation of psychophysical results demonstrating human visual space distortion from motion information.
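The minimum-negative-depth criterion mentioned in the abstract can be illustrated with a small synthetic sketch. Assuming pure forward translation, the image flow at each point lies along the ray from the focus of expansion (FOE) through the point, scaled by inverse depth; a candidate FOE therefore implies a depth sign at every point, and the true FOE yields no negative depths. The setup below (function names, scene, and candidate list) is illustrative only and not taken from the paper:

```python
import random

def depth_sign(point, flow, foe):
    # Under pure translation, flow at `point` is (point - foe) / Z.
    # Projecting the observed flow onto (point - foe) gives |point - foe|^2 / Z,
    # whose sign matches the sign of the recovered depth Z.
    dx, dy = point[0] - foe[0], point[1] - foe[1]
    return flow[0] * dx + flow[1] * dy

def negative_depth_count(points, flows, foe):
    # Number of scene points whose recovered depth would be negative
    # under the candidate FOE -- the quantity the criterion minimizes.
    return sum(1 for p, v in zip(points, flows) if depth_sign(p, v, foe) < 0)

random.seed(0)
true_foe = (0.1, -0.2)  # hypothetical ground-truth FOE on the image plane
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
depths = [random.uniform(1.0, 5.0) for _ in points]
# Synthetic translational flow field: v = (p - FOE) / Z.
flows = [((p[0] - true_foe[0]) / z, (p[1] - true_foe[1]) / z)
         for p, z in zip(points, depths)]

# The candidate minimizing the negative-depth count recovers the true FOE.
candidates = [true_foe, (0.5, 0.5), (-0.8, 0.3)]
best = min(candidates, key=lambda e: negative_depth_count(points, flows, e))
```

In this noiseless sketch the true FOE produces zero negative depths, while wrong candidates flip the depth sign for points lying between the two hypothesized epipoles; the paper's contribution concerns what shape information survives when the minimizing estimate is still in error.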

Cite

Text

Fermüller and Aloimonos. "Which Shape from Motion?" IEEE/CVF International Conference on Computer Vision, 1998. doi:10.1109/ICCV.1998.710792

Markdown

[Fermüller and Aloimonos. "Which Shape from Motion?" IEEE/CVF International Conference on Computer Vision, 1998.](https://mlanthology.org/iccv/1998/fermuller1998iccv-shape/) doi:10.1109/ICCV.1998.710792

BibTeX

@inproceedings{fermuller1998iccv-shape,
  title     = {{Which Shape from Motion?}},
  author    = {Fermüller, Cornelia and Aloimonos, Yiannis},
  booktitle = {IEEE/CVF International Conference on Computer Vision},
  year      = {1998},
  pages     = {689--695},
  doi       = {10.1109/ICCV.1998.710792},
  url       = {https://mlanthology.org/iccv/1998/fermuller1998iccv-shape/}
}