Djinn: Interaction Framework for Home Environment Using Speech and Vision

Abstract

In this paper we describe an interaction framework that uses speech recognition and computer vision to model a new generation of interfaces for the residential environment. We outline the blueprint of the architecture and describe its main building blocks. We present a concrete prototype platform on which this novel architecture has been deployed and will be tested in user field trials. This work is co-funded by the EC as part of the HomeTalk project (IST-2001-33507).

Cite

Text

Kleindienst et al. "Djinn: Interaction Framework for Home Environment Using Speech and Vision." European Conference on Computer Vision, 2004. doi:10.1007/978-3-540-24837-8_15

Markdown

[Kleindienst et al. "Djinn: Interaction Framework for Home Environment Using Speech and Vision." European Conference on Computer Vision, 2004.](https://mlanthology.org/eccv/2004/kleindienst2004eccv-djinn/) doi:10.1007/978-3-540-24837-8_15

BibTeX

@inproceedings{kleindienst2004eccv-djinn,
  title     = {{Djinn: Interaction Framework for Home Environment Using Speech and Vision}},
  author    = {Kleindienst, Jan and Macek, Tomás and Serédi, Ladislav and Sedivý, Jan},
  booktitle = {European Conference on Computer Vision},
  year      = {2004},
  pages     = {153--164},
  doi       = {10.1007/978-3-540-24837-8_15},
  url       = {https://mlanthology.org/eccv/2004/kleindienst2004eccv-djinn/}
}