Perspectively Invariant Normal Features

Abstract

We extend the successful 2D robust feature concept into the third dimension by producing a descriptor for a reconstructed 3D surface region. The descriptor is perspectively invariant whenever the region can be locally well approximated by a plane. We exploit depth and texture information, which is nowadays available in real time from video of moving cameras, from stereo systems, or from PMD cameras (photonic mixer devices). By computing a normal view onto the surface we retain the descriptiveness of similarity-invariant features such as SIFT while achieving invariance against perspective distortions, whereas descriptiveness typically suffers when using affine-invariant features. Our approach can be exploited for structure from motion, for stereo or PMD cameras, for the alignment of large-scale reconstructions, or for improved video registration.
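The core idea of the "normal view" can be sketched as a homography that virtually rotates the camera until it looks straight down the estimated surface normal, so the locally planar patch appears fronto-parallel before a similarity-invariant descriptor (e.g. SIFT) is computed. The following is a minimal illustrative sketch, not the authors' implementation; the intrinsics `K` and the normal convention (pointing away from the camera) are assumptions.

```python
import numpy as np

def normalizing_homography(K, n):
    """Homography warping an image patch lying on a plane with normal n
    (in camera coordinates) into a fronto-parallel "normal" view.

    Models a pure rotation of a virtual camera: H = K R K^{-1}, where R
    aligns the plane normal with the optical axis. K is the 3x3 intrinsic
    matrix; n need not be unit length.
    """
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    if n[2] < 0:            # assume the normal points away from the camera
        n = -n
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)      # rotation axis (scaled by sin of the angle)
    c = float(np.dot(n, z)) # cosine of the rotation angle
    if np.linalg.norm(v) < 1e-12:
        R = np.eye(3)       # patch is already fronto-parallel
    else:
        # Compact Rodrigues form for the rotation taking n onto z
        vx = np.array([[0.0, -v[2], v[1]],
                       [v[2], 0.0, -v[0]],
                       [-v[1], v[0], 0.0]])
        R = np.eye(3) + vx + vx @ vx / (1.0 + c)
    return K @ R @ np.linalg.inv(K)
```

Warping the local patch with this homography (e.g. via a resampling routine such as OpenCV's `warpPerspective`) removes the perspective foreshortening, after which any standard similarity-invariant descriptor can be applied.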

Cite

Text

Köser and Koch. "Perspectively Invariant Normal Features." IEEE/CVF International Conference on Computer Vision, 2007. doi:10.1109/ICCV.2007.4408837

Markdown

[Köser and Koch. "Perspectively Invariant Normal Features." IEEE/CVF International Conference on Computer Vision, 2007.](https://mlanthology.org/iccv/2007/koser2007iccv-perspectively/) doi:10.1109/ICCV.2007.4408837

BibTeX

@inproceedings{koser2007iccv-perspectively,
  title     = {{Perspectively Invariant Normal Features}},
  author    = {Köser, Kevin and Koch, Reinhard},
  booktitle = {IEEE/CVF International Conference on Computer Vision},
  year      = {2007},
  pages     = {1--8},
  doi       = {10.1109/ICCV.2007.4408837},
  url       = {https://mlanthology.org/iccv/2007/koser2007iccv-perspectively/}
}