Fusion of Visual and Ultrasonic Information for Environmental Modelling
Abstract
Information obtained from calibrated cameras by means of computer vision is integrated with location events from an ultrasonic tracking system deployed in an indoor office. This results in improved estimates of state and location which are used to augment the environmental model maintained by a sentient computing system. Fusion of the different sources of information takes place at a high level using Bayesian networks to model dependencies and reliabilities of the multi-modal variables. Context is represented using a world model of both the static and dynamic environment. The world model serves both as an ontology of prior information for multi-modal integration and as a source of context for applications.
Cite
Text
Town. "Fusion of Visual and Ultrasonic Information for Environmental Modelling." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2004. doi:10.1109/CVPR.2004.352
Markdown
[Town. "Fusion of Visual and Ultrasonic Information for Environmental Modelling." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2004.](https://mlanthology.org/cvpr/2004/town2004cvpr-fusion/) doi:10.1109/CVPR.2004.352
BibTeX
@inproceedings{town2004cvpr-fusion,
  title = {{Fusion of Visual and Ultrasonic Information for Environmental Modelling}},
  author = {Town, Christopher},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year = {2004},
  pages = {124},
  doi = {10.1109/CVPR.2004.352},
  url = {https://mlanthology.org/cvpr/2004/town2004cvpr-fusion/}
}