Vision-Based Speaker Detection Using Bayesian Networks
Abstract
The development of user interfaces based on vision and speech requires the solution of a challenging statistical inference problem: The intentions and actions of multiple individuals must be inferred from noisy and ambiguous data. We argue that Bayesian network models are an attractive statistical framework for cue fusion in these applications. Bayes nets combine a natural mechanism for expressing contextual information with efficient algorithms for learning and inference. We illustrate these points through the development of a Bayes net model for detecting when a user is speaking. The model combines four simple vision sensors: face detection, skin color, skin texture, and mouth motion. We present some promising experimental results.
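The cue-fusion idea in the abstract can be illustrated with a minimal sketch. The paper uses a full Bayesian network; as a simpler stand-in, the fragment below fuses the four binary cues (face detection, skin color, skin texture, mouth motion) with a naive Bayes model. All prior and likelihood values here are made-up placeholders, not numbers from the paper.

```python
# Illustrative naive Bayes fusion of four binary vision cues -- a simplified
# stand-in for the paper's Bayes net speaker-detection model.
# All probabilities below are assumed placeholder values.

PRIOR_SPEAKING = 0.5  # assumed prior probability that the user is speaking

# (P(cue fires | speaking), P(cue fires | not speaking)) for each sensor.
LIKELIHOODS = {
    "face":    (0.90, 0.50),
    "color":   (0.85, 0.60),
    "texture": (0.80, 0.55),
    "motion":  (0.75, 0.10),
}

def posterior_speaking(observations):
    """Return P(speaking | observed cues) under a naive Bayes model.

    observations maps each cue name to True (fired) or False (did not fire).
    """
    p_s, p_ns = PRIOR_SPEAKING, 1.0 - PRIOR_SPEAKING
    for cue, fired in observations.items():
        p_true, p_false = LIKELIHOODS[cue]
        # Multiply in the likelihood of this cue's observation under
        # each hypothesis (speaking / not speaking).
        p_s *= p_true if fired else 1.0 - p_true
        p_ns *= p_false if fired else 1.0 - p_false
    # Normalize to get the posterior probability of speaking.
    return p_s / (p_s + p_ns)

if __name__ == "__main__":
    obs = {"face": True, "color": True, "texture": True, "motion": True}
    print(posterior_speaking(obs))
```

Unlike this naive Bayes sketch, a Bayesian network can also encode dependencies between the cues (e.g. skin color and skin texture both depending on face presence), which is part of what makes the framework attractive for this task.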
Cite
Text
Rehg et al. "Vision-Based Speaker Detection Using Bayesian Networks." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1999. doi:10.1109/CVPR.1999.784617
Markdown
[Rehg et al. "Vision-Based Speaker Detection Using Bayesian Networks." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1999.](https://mlanthology.org/cvpr/1999/rehg1999cvpr-vision/) doi:10.1109/CVPR.1999.784617
BibTeX
@inproceedings{rehg1999cvpr-vision,
title = {{Vision-Based Speaker Detection Using Bayesian Networks}},
author = {Rehg, James M. and Murphy, Kevin P. and Fieguth, Paul W.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {1999},
pages = {2110--2116},
doi = {10.1109/CVPR.1999.784617},
url = {https://mlanthology.org/cvpr/1999/rehg1999cvpr-vision/}
}