Visual Data Fusion for Objects Localization by Active Vision

Abstract

Visual sensors provide only uncertain and partial knowledge of a scene. In this article, we present a scene knowledge representation suited to the integration and fusion of new, uncertain, and partial sensor measurements. It is based on a mixture of stochastic and set-membership models. We consider that, for a large class of applications, an approximate representation is sufficient to build a preliminary map of the scene. Our approximation mainly relies on ellipsoidal calculus, by means of a normal assumption for stochastic laws and ellipsoidal outer or inner bounding for uniform laws. These approximations allow us to build an efficient estimation process that integrates visual data online. Based on this estimation scheme, optimal exploratory motions of the camera can be automatically determined. Real-time experimental results validating our approach are finally given.
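The abstract's normal (Gaussian) approximation makes fusing successive uncertain measurements cheap. As a minimal illustration, not taken from the paper, the sketch below fuses two independent scalar Gaussian estimates of the same quantity in information form, where precisions (inverse variances) add; the function name and numeric values are hypothetical.

```python
# Hedged sketch (not from the paper): information-form fusion of two
# independent Gaussian estimates of the same scalar quantity, in the
# spirit of the paper's normal-approximation fusion of measurements.

def fuse_gaussian(mean1, var1, mean2, var2):
    """Fuse two independent Gaussian estimates of the same quantity.

    Precisions (1/variance) add; the fused mean is the
    precision-weighted average of the two input means.
    """
    precision = 1.0 / var1 + 1.0 / var2
    fused_var = 1.0 / precision
    fused_mean = fused_var * (mean1 / var1 + mean2 / var2)
    return fused_mean, fused_var

# Example: two noisy depth estimates of the same object point (metres).
mean, var = fuse_gaussian(2.0, 0.04, 2.2, 0.09)
# The fused variance is smaller than either input variance, which is
# why integrating each new view can only tighten the scene map.
```

In the paper's multidimensional setting the same idea applies to covariance ellipsoids rather than scalar variances, with set-membership (uniform) uncertainty handled by ellipsoidal outer or inner bounds instead.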

Cite

Text

Flandin and Chaumette. "Visual Data Fusion for Objects Localization by Active Vision." European Conference on Computer Vision, 2002. doi:10.1007/3-540-47979-1_21

Markdown

[Flandin and Chaumette. "Visual Data Fusion for Objects Localization by Active Vision." European Conference on Computer Vision, 2002.](https://mlanthology.org/eccv/2002/flandin2002eccv-visual/) doi:10.1007/3-540-47979-1_21

BibTeX

@inproceedings{flandin2002eccv-visual,
  title     = {{Visual Data Fusion for Objects Localization by Active Vision}},
  author    = {Flandin, Grégory and Chaumette, François},
  booktitle = {European Conference on Computer Vision},
  year      = {2002},
  pages     = {312--326},
  doi       = {10.1007/3-540-47979-1_21},
  url       = {https://mlanthology.org/eccv/2002/flandin2002eccv-visual/}
}