Inertial Sensor-Aligned Visual Feature Descriptors
Abstract
We propose to align the orientation of local feature descriptors with the gravitational force measured by inertial sensors. In contrast to standard approaches, which derive a reproducible feature orientation from the intensities of neighboring pixels to remain invariant against rotation, this approach yields clearly distinguishable descriptors for congruent features in different orientations. Gravity-aligned feature descriptors (GAFD) are suitable for any application relying on corresponding points in multiple images of static scenes and are particularly beneficial in the presence of differently oriented repetitive features, which are widespread in urban scenes and on man-made objects. In this paper, we show on several examples that aligning descriptors with gravity both speeds up feature description and matching and yields better matches than traditional techniques.
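The core idea of the abstract can be sketched in a few lines: instead of estimating a keypoint's orientation from local image gradients, project the IMU-measured gravity vector into the image at the keypoint and use the angle of that projected direction to orient the descriptor. The sketch below is a minimal illustration of this projection step, not the paper's implementation; the function name, the small step size along the gravity direction, and the pinhole-camera assumptions are ours.

```python
import numpy as np

def gravity_aligned_angle(keypoint, gravity_cam, K):
    """Angle (radians) of the projected gravity direction at a keypoint.

    keypoint:    (u, v) pixel coordinates of the feature
    gravity_cam: 3-vector, gravity direction in camera coordinates
                 (as delivered by the device's inertial sensors)
    K:           3x3 pinhole camera intrinsic matrix
    """
    u, v = keypoint
    # Back-project the keypoint onto a 3D ray at unit depth.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Move a small step along the (normalized) gravity direction in 3D.
    shifted = ray + 0.01 * gravity_cam / np.linalg.norm(gravity_cam)
    # Project the shifted point back into the image.
    proj = K @ shifted
    u2, v2 = proj[0] / proj[2], proj[1] / proj[2]
    # The 2D direction from the keypoint to the shifted projection
    # is the local image-space orientation of gravity.
    return np.arctan2(v2 - v, u2 - u)
```

A descriptor (e.g. a SURF- or SIFT-style histogram) built relative to this angle is then consistently oriented across views of a static scene, so two congruent but differently rotated features (say, an upright and a sideways corner of a window) receive distinct descriptors instead of identical ones.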
Cite
Text
Kurz and Himane. "Inertial Sensor-Aligned Visual Feature Descriptors." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011. doi:10.1109/CVPR.2011.5995339
Markdown
[Kurz and Himane. "Inertial Sensor-Aligned Visual Feature Descriptors." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2011.](https://mlanthology.org/cvpr/2011/kurz2011cvpr-inertial/) doi:10.1109/CVPR.2011.5995339
BibTeX
@inproceedings{kurz2011cvpr-inertial,
title = {{Inertial Sensor-Aligned Visual Feature Descriptors}},
author = {Kurz, Daniel and Himane, Selim Ben},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2011},
pages = {161-166},
doi = {10.1109/CVPR.2011.5995339},
url = {https://mlanthology.org/cvpr/2011/kurz2011cvpr-inertial/}
}