Extending Interrupted Feature Point Tracking for 3-D Affine Reconstruction

Abstract

Feature point tracking over a video sequence fails when the points move out of the field of view or become occluded by other objects. In this paper, we extend such interrupted tracking by imposing the constraint that, under the affine camera model, all feature trajectories should lie in an affine space. Our method consists of iterations for optimally extending the trajectories and for optimally estimating the affine space, coupled with an outlier removal process. Using real video images, we demonstrate that our method can restore a sufficient number of trajectories for detailed 3-D reconstruction.
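The core constraint can be illustrated with a minimal NumPy sketch. Under the affine camera model, each full trajectory vector (x_1, y_1, ..., x_M, y_M) over M frames lies in a 3-D affine subspace of R^{2M}; the sketch below estimates that subspace by PCA from complete trajectories and fills in the missing frames of an interrupted trajectory by least-squares projection. This is an assumption-laden toy version of the idea (synthetic noise-free data, no iteration, no outlier removal), not the paper's actual algorithm.

```python
import numpy as np

# Toy illustration of the affine-space constraint (hypothetical setup):
# under the affine camera model, every full trajectory vector
# p = (x_1, y_1, ..., x_M, y_M) over M frames lies in a 3-D affine
# subspace of R^{2M}. We fit that subspace from complete trajectories
# and extend an interrupted one. The paper's iterative optimization
# and outlier removal are omitted.

rng = np.random.default_rng(0)
M = 10                                  # number of frames
N_full = 30                             # number of complete trajectories

# Synthesize data that exactly satisfies the affine model:
# p_i = c + A @ s_i, with 3-D shape vectors s_i.
c = rng.normal(size=2 * M)              # translation part (centroid motion)
A = rng.normal(size=(2 * M, 3))         # motion matrix
S = rng.normal(size=(3, N_full))        # 3-D shape coordinates
P = c[:, None] + A @ S                  # 2M x N matrix of full trajectories

# Estimate the 3-D affine subspace by PCA over the trajectory vectors.
mean = P.mean(axis=1)
U, _, _ = np.linalg.svd(P - mean[:, None], full_matrices=False)
basis = U[:, :3]                        # 2M x 3 orthonormal basis

# An interrupted trajectory: same model, but the last 3 frames unseen.
s_new = rng.normal(size=3)
p_true = c + A @ s_new
observed = np.ones(2 * M, dtype=bool)
observed[-6:] = False                   # last 3 (x, y) pairs are missing

# Solve for the affine coordinates from the observed rows only,
# then reconstruct the entire trajectory including missing frames.
coef, *_ = np.linalg.lstsq(basis[observed],
                           (p_true - mean)[observed], rcond=None)
p_extended = mean + basis @ coef

print(np.max(np.abs(p_extended - p_true)))   # near zero on noise-free data
```

On noise-free synthetic data the extended trajectory matches the ground truth almost exactly; with real, noisy tracks the paper instead alternates between re-estimating the affine space and re-extending the trajectories, and removes outlier tracks along the way.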

Cite

Text

Sugaya and Kanatani. "Extending Interrupted Feature Point Tracking for 3-D Affine Reconstruction." European Conference on Computer Vision, 2004. doi:10.1007/978-3-540-24670-1_24

Markdown

[Sugaya and Kanatani. "Extending Interrupted Feature Point Tracking for 3-D Affine Reconstruction." European Conference on Computer Vision, 2004.](https://mlanthology.org/eccv/2004/sugaya2004eccv-extending/) doi:10.1007/978-3-540-24670-1_24

BibTeX

@inproceedings{sugaya2004eccv-extending,
  title     = {{Extending Interrupted Feature Point Tracking for 3-D Affine Reconstruction}},
  author    = {Sugaya, Yasuyuki and Kanatani, Ken-ichi},
  booktitle = {European Conference on Computer Vision},
  year      = {2004},
  pages     = {310--321},
  doi       = {10.1007/978-3-540-24670-1_24},
  url       = {https://mlanthology.org/eccv/2004/sugaya2004eccv-extending/}
}