SURFTrac: Efficient Tracking and Continuous Object Recognition Using Local Feature Descriptors

Abstract

We present an efficient algorithm for continuous image recognition and feature descriptor tracking in video which operates by reducing the search space of possible interest points inside the scale-space image pyramid. Instead of performing tracking in 2D images, we search and match candidate features in local neighborhoods inside the 3D image pyramid without computing their feature descriptors. The candidates are further validated by fitting them to a motion model. The resulting tracked interest points are more repeatable and resilient to noise, and descriptor computation becomes much more efficient because only those areas of the image pyramid that contain features are searched. We demonstrate our method on real-time object recognition and label augmentation running on a mobile device.
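The core idea above can be illustrated with a minimal sketch: predict each tracked point's position with a motion model, then search only a small neighborhood of the 3D (scale, y, x) response pyramid for the strongest candidate, deferring descriptor computation. This is a toy NumPy illustration, not the paper's implementation; the constant-velocity predictor, the window radius, and the function names are assumptions for the example.

```python
import numpy as np

def predict_position(prev_pos, velocity):
    """Constant-velocity motion model (an assumption for this sketch):
    predict where a tracked interest point should appear next frame."""
    return (prev_pos[0] + velocity[0], prev_pos[1] + velocity[1])

def track_in_pyramid(response, prev_pos, velocity, radius=2):
    """Search a small 3D neighborhood (all scales, small y/x window) of
    the response pyramid around the predicted position and return the
    strongest candidate, without computing any feature descriptor."""
    py, px = predict_position(prev_pos, velocity)
    n_scales, h, w = response.shape
    y0, y1 = max(0, py - radius), min(h, py + radius + 1)
    x0, x1 = max(0, px - radius), min(w, px + radius + 1)
    window = response[:, y0:y1, x0:x1]
    s, dy, dx = np.unravel_index(np.argmax(window), window.shape)
    return s, y0 + dy, x0 + dx

# Toy response pyramid: 3 scales of a 16x16 frame with one strong blob.
rng = np.random.default_rng(0)
pyr = rng.random((3, 16, 16)) * 0.1
pyr[1, 9, 11] = 1.0  # the "true" interest point in the new frame
scale, y, x = track_in_pyramid(pyr, prev_pos=(8, 10), velocity=(1, 1))
print(scale, y, x)  # -> 1 9 11
```

Because only the small predicted window is examined, the per-frame cost scales with the number of tracked points rather than the full pyramid size, which is what makes the approach attractive on mobile devices.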

Cite

Text

Ta et al. "SURFTrac: Efficient Tracking and Continuous Object Recognition Using Local Feature Descriptors." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2009. doi:10.1109/CVPR.2009.5206831

Markdown

[Ta et al. "SURFTrac: Efficient Tracking and Continuous Object Recognition Using Local Feature Descriptors." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2009.](https://mlanthology.org/cvpr/2009/ta2009cvpr-surftrac/) doi:10.1109/CVPR.2009.5206831

BibTeX

@inproceedings{ta2009cvpr-surftrac,
  title     = {{SURFTrac: Efficient Tracking and Continuous Object Recognition Using Local Feature Descriptors}},
  author    = {Ta, Duy-Nguyen and Chen, Wei-Chao and Gelfand, Natasha and Pulli, Kari},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2009},
  pages     = {2937--2944},
  doi       = {10.1109/CVPR.2009.5206831},
  url       = {https://mlanthology.org/cvpr/2009/ta2009cvpr-surftrac/}
}