Robust Feature Matching in 2.3µs
Abstract
In this paper we present a robust feature matching scheme in which features can be matched in 2.3 μs. For a typical task involving 150 features per image, this results in a processing time of 500 μs for feature extraction and matching. In order to achieve very fast matching we use simple features based on histograms of pixel intensities and an indexing scheme based on their joint distribution. The features are stored with a novel bit mask representation which requires only 44 bytes of memory per feature and allows computation of a dissimilarity score in 20 ns. A training phase gives the patch-based features invariance to small viewpoint variations. Larger viewpoint variations are handled by training entirely independent sets of features from different viewpoints. A complete system is presented where a database of around 13,000 features is used to robustly localise a single planar target in just over a millisecond, including all steps from feature detection to model fitting. The resulting system shows comparable robustness to SIFT and Ferns while using a tiny fraction of the processing time, and in the latter case a fraction of the memory as well.
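The 20 ns dissimilarity score quoted above rests on the bit-mask representation: each stored feature records which quantised intensity values were observed at each sampled pixel during training, so a runtime patch can be scored with a few bitwise operations. The abstract does not give the exact layout, so the following is a minimal sketch under that assumption; the binning scheme and all function names are hypothetical.

```python
# Hedged sketch of a bit-mask dissimilarity score (layout assumed, not taken
# from the paper's text): each stored feature keeps one small bitmask per
# sampled pixel, with bit b set if intensity bin b was observed for that
# pixel at some point during the training phase.

def quantise(intensity, n_bins=4, max_val=256):
    """Map a pixel intensity to a one-hot bin mask (hypothetical binning)."""
    bin_index = intensity * n_bins // max_val
    return 1 << bin_index

def dissimilarity(feature_masks, patch_intensities, n_bins=4):
    """Count sampled pixels whose runtime bin was never seen in training.

    A zero result of (trained mask AND runtime one-hot bin) marks an error;
    the score is the number of such errors across the patch.
    """
    errors = 0
    for mask, intensity in zip(feature_masks, patch_intensities):
        if mask & quantise(intensity, n_bins) == 0:
            errors += 1
    return errors
```

In a real implementation the per-pixel masks would presumably be packed bin-by-bin into machine words, so that the whole score reduces to a handful of AND and popcount instructions per feature, which is what makes a 20 ns comparison plausible.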
Cite

Text
Taylor et al. "Robust Feature Matching in 2.3µs." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2009. doi:10.1109/CVPRW.2009.5204314

Markdown
[Taylor et al. "Robust Feature Matching in 2.3µs." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2009.](https://mlanthology.org/cvprw/2009/taylor2009cvprw-robust/) doi:10.1109/CVPRW.2009.5204314

BibTeX
@inproceedings{taylor2009cvprw-robust,
title = {{Robust Feature Matching in 2.3µs}},
author = {Taylor, Simon and Rosten, Edward and Drummond, Tom},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2009},
pages = {15--22},
doi = {10.1109/CVPRW.2009.5204314},
url = {https://mlanthology.org/cvprw/2009/taylor2009cvprw-robust/}
}