Learning Texton Models for Real-Time Scene Context

Abstract

We present a new model for scene context based on the distribution of textons within images. Our approach provides continuous, consistent scene gist throughout a video sequence and is suitable for applications in which the camera regularly views uninformative parts of the scene. We show that our model outperforms the state of the art for place recognition. We further show how to deduce the camera orientation from our scene gist, and finally show how our system can be applied to active object search.
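The abstract describes scene context as a distribution of textons, i.e. cluster centers of per-pixel filter-bank responses. As a rough illustration only (not the authors' implementation), the sketch below turns an image into a normalized texton histogram, assuming the texton centers have already been learned offline, e.g. by k-means over filter responses; the two tiny "filters" here are placeholders for a real filter bank:

```python
import numpy as np

def filter_responses(image, filters):
    """Per-pixel responses of a toy square filter bank (valid cross-correlation)."""
    k = filters.shape[1]  # assume square k x k filters
    H, W = image.shape
    out = np.empty((len(filters), H - k + 1, W - k + 1))
    for f, filt in enumerate(filters):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[f, i, j] = np.sum(image[i:i + k, j:j + k] * filt)
    # One row per pixel, one column per filter.
    return out.reshape(len(filters), -1).T

def texton_histogram(responses, textons):
    """Assign each pixel's response vector to its nearest texton center,
    then return the normalized texton-count histogram (the 'scene gist' feature)."""
    # Squared Euclidean distance from every pixel to every texton center.
    d = ((responses[:, None, :] - textons[None, :, :]) ** 2).sum(axis=-1)
    labels = d.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(textons)).astype(float)
    return hist / hist.sum()
```

A histogram like this can then be compared across frames or places (e.g. with a chi-squared or histogram-intersection distance); the hypothetical filter bank, texton count, and distance choice above are all assumptions for illustration.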

Cite

Text

Flint et al. "Learning Texton Models for Real-Time Scene Context." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2009. doi:10.1109/CVPRW.2009.5204356

Markdown

[Flint et al. "Learning Texton Models for Real-Time Scene Context." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2009.](https://mlanthology.org/cvprw/2009/flint2009cvprw-learning/) doi:10.1109/CVPRW.2009.5204356

BibTeX

@inproceedings{flint2009cvprw-learning,
  title     = {{Learning Texton Models for Real-Time Scene Context}},
  author    = {Flint, Alex and Reid, Ian D. and Murray, David William},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2009},
  pages     = {41--48},
  doi       = {10.1109/CVPRW.2009.5204356},
  url       = {https://mlanthology.org/cvprw/2009/flint2009cvprw-learning/}
}