A Graphical Model for Unifying Tracking and Classification Within a Multimodal Human-Robot Interaction Scenario
Abstract
This paper introduces our research platform for enabling a multimodal Human-Robot Interaction scenario, as well as our research vision of approaching problems holistically to realize this scenario. In this paper, however, the main focus lies on the image processing domain, where our vision has been realized by combining particle tracking and Dynamic Bayesian Network classification in a unified Graphical Model. This combination enhances the tracking process through an adaptive motion model, realized via a Dynamic Bayesian Network that models several motion classes. The Graphical Model directly integrates the classification step into the tracking process. First promising results show the potential of the approach.
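To make the core idea concrete, the following is a minimal, hypothetical sketch of a particle filter whose motion model switches between discrete motion classes. It is not the authors' implementation: the two classes ("static" vs. "linear motion"), the noise levels, and the simple per-particle class-switching probability are all illustrative assumptions standing in for the paper's Dynamic Bayesian Network.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500  # number of particles
# Each particle carries a state [x, y, vx, vy] and a discrete motion-class label.
states = np.zeros((N, 4))
classes = rng.integers(0, 2, size=N)  # 0 = "static", 1 = "linear motion" (assumed classes)

NOISE = {0: 0.05, 1: 0.5}  # class-conditioned process noise (hypothetical values)
SWITCH_P = 0.05            # per-step probability of switching motion class

def predict(states, classes):
    """Propagate each particle with the motion model selected by its class."""
    out = states.copy()
    moving = classes == 1
    out[moving, :2] += out[moving, 2:]  # constant-velocity update for "moving" particles
    for c in (0, 1):
        m = classes == c
        out[m] += rng.normal(0.0, NOISE[c], size=(m.sum(), 4))
    # Crude discrete class transition, standing in for DBN inference over motion classes.
    flip = rng.random(len(classes)) < SWITCH_P
    return out, np.where(flip, 1 - classes, classes)

def update(states, z, sigma=1.0):
    """Weight particles by a Gaussian likelihood of observing position z."""
    d2 = ((states[:, :2] - z) ** 2).sum(axis=1)
    w = np.exp(-0.5 * d2 / sigma**2)
    return w / w.sum()

def resample(states, classes, w):
    """Multinomial resampling of particles (and their class labels) by weight."""
    idx = rng.choice(len(w), size=len(w), p=w)
    return states[idx], classes[idx]

# One tracking step against a hypothetical observation at (1, 1).
states, classes = predict(states, classes)
w = update(states, np.array([1.0, 1.0]))
states, classes = resample(states, classes, w)
```

Because the class label travels with each particle, resampling implicitly concentrates probability mass on the motion class that best explains the observations, which is the intuition behind coupling tracking and classification in one model.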
Cite
Text
Rehrl et al. "A Graphical Model for Unifying Tracking and Classification Within a Multimodal Human-Robot Interaction Scenario." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2010. doi:10.1109/CVPRW.2010.5543751

Markdown

[Rehrl et al. "A Graphical Model for Unifying Tracking and Classification Within a Multimodal Human-Robot Interaction Scenario." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2010.](https://mlanthology.org/cvprw/2010/rehrl2010cvprw-graphical/) doi:10.1109/CVPRW.2010.5543751

BibTeX
@inproceedings{rehrl2010cvprw-graphical,
title = {{A Graphical Model for Unifying Tracking and Classification Within a Multimodal Human-Robot Interaction Scenario}},
author = {Rehrl, Tobias and Gast, Jürgen and Theißing, Nikolaus and Bannat, Alexander and Arsic, Dejan and Wallhoff, Frank and Rigoll, Gerhard and Mayer, Christoph and Radig, Bernd},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2010},
pages = {17-23},
doi = {10.1109/CVPRW.2010.5543751},
url = {https://mlanthology.org/cvprw/2010/rehrl2010cvprw-graphical/}
}