Bayesian Networks for Speech and Image Integration

Abstract

Natural human-computer interfaces must cope with a wide range of difficulties: noisy data, vague meanings, and context dependence. An essential aspect of everyday communication is the human ability to ground verbal interpretations in visual perception. A system must therefore solve the correspondence problem of relating verbal and visual descriptions of the same object. This contribution proposes a novel solution to this problem using Bayesian networks. To capture the vague meanings of adjectives used by the speaker, psycholinguistic experiments are evaluated. Object recognition errors are taken into account by conditional probabilities estimated on test sets. The Bayesian network is built up dynamically from the verbal object description and is evaluated by an inference technique combining bucket elimination and conditioning. Results show that speech and image data are interpreted more robustly in combination than in isolation.
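The fusion idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it is a tiny hand-built network with invented probabilities, where a possibly erroneous visual color classification and a vague spoken color adjective are treated as conditionally independent observations of the true object color, and the posterior is computed by exact enumeration (the full paper uses dynamically constructed networks with bucket elimination and conditioning).

```python
# Illustrative sketch only: all structure and numbers are invented assumptions.

# Prior over the true object color.
P_color = {"red": 0.5, "orange": 0.5}

# P(vision output | true color): models object-recognition errors,
# as would be estimated on a test set.
P_vision = {
    "red":    {"red": 0.8, "orange": 0.2},
    "orange": {"red": 0.3, "orange": 0.7},
}

# P(spoken adjective | true color): models the vague meaning of
# color words, as might be estimated from psycholinguistic data.
P_speech = {
    "red":    {"red": 0.9, "orange": 0.1},
    "orange": {"red": 0.4, "orange": 0.6},
}

def posterior(vision_obs, speech_obs):
    """P(true color | vision, speech), assuming vision and speech are
    conditionally independent given the true color."""
    joint = {c: P_color[c] * P_vision[c][vision_obs] * P_speech[c][speech_obs]
             for c in P_color}
    z = sum(joint.values())
    return {c: p / z for c, p in joint.items()}

# Vision says "orange" while the speaker says "red": the posterior
# weighs both error-prone channels against each other.
print(posterior("orange", "red"))
```

The point of the sketch is the correspondence problem in miniature: neither channel is trusted outright; instead, both likelihoods multiply into a single posterior over the referred-to object property.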

Cite

Text

Wachsmuth and Sagerer. "Bayesian Networks for Speech and Image Integration." AAAI Conference on Artificial Intelligence, 2002. doi:10.5555/777092.777141

Markdown

[Wachsmuth and Sagerer. "Bayesian Networks for Speech and Image Integration." AAAI Conference on Artificial Intelligence, 2002.](https://mlanthology.org/aaai/2002/wachsmuth2002aaai-bayesian/) doi:10.5555/777092.777141

BibTeX

@inproceedings{wachsmuth2002aaai-bayesian,
  title     = {{Bayesian Networks for Speech and Image Integration}},
  author    = {Wachsmuth, Sven and Sagerer, Gerhard},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2002},
  pages     = {300--306},
  doi       = {10.5555/777092.777141},
  url       = {https://mlanthology.org/aaai/2002/wachsmuth2002aaai-bayesian/}
}