Graphical Models for Recognizing Human Interactions
Abstract
We describe a real-time computer vision and machine learning system for modeling and recognizing human actions and interactions. Two different domains are explored: recognition of two-handed motions in the martial art 'Tai Chi', and multiple-person interactions in a visual surveillance task. Our system combines top-down with bottom-up information using a feedback loop, and is formulated with a Bayesian framework. Two different graphical models (HMMs and Coupled HMMs) are used for modeling both individual actions and multiple-agent interactions, and CHMMs are shown to work more efficiently and accurately for a given amount of training. Finally, to overcome the limited amounts of training data, we demonstrate that 'synthetic agents' (Alife-style agents) can be used to develop flexible prior models of the person-to-person interactions.
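To make the HMM side of this recognition pipeline concrete, below is a minimal sketch (not the authors' implementation) of likelihood-based action classification: one HMM is trained per action class, and a new observation sequence is labeled by whichever model assigns it the highest forward-algorithm log-likelihood. A CHMM extends this idea by coupling two such state chains, one per interacting agent. All model names and parameter values here are illustrative assumptions, not values from the paper.

```python
# Sketch of HMM-based action recognition via the scaled forward algorithm.
# Parameters are toy placeholders, not trained models from the paper.
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """log P(obs | HMM) computed with the scaled forward algorithm.

    obs: sequence of discrete observation symbols (ints)
    pi:  (N,) initial state distribution
    A:   (N, N) transition matrix, A[i, j] = P(state j | state i)
    B:   (N, M) emission matrix, B[i, k] = P(symbol k | state i)
    """
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()                      # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

def classify(obs, models):
    """Return the action class whose HMM explains the sequence best."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))

# Toy two-state, three-symbol models for two hypothetical interaction classes.
models = {
    "approach": (np.array([0.6, 0.4]),
                 np.array([[0.7, 0.3], [0.2, 0.8]]),
                 np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])),
    "walk_together": (np.array([0.5, 0.5]),
                      np.array([[0.9, 0.1], [0.1, 0.9]]),
                      np.array([[0.2, 0.2, 0.6], [0.6, 0.3, 0.1]])),
}

print(classify([0, 1, 2, 2, 1], models))
```

The same classify-by-likelihood scheme carries over to the paper's CHMMs; only the joint transition structure (two coupled chains instead of one) and hence the forward recursion change.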
Cite
Text
Oliver et al. "Graphical Models for Recognizing Human Interactions." Neural Information Processing Systems, 1998.
Markdown
[Oliver et al. "Graphical Models for Recognizing Human Interactions." Neural Information Processing Systems, 1998.](https://mlanthology.org/neurips/1998/oliver1998neurips-graphical/)
BibTeX
@inproceedings{oliver1998neurips-graphical,
title = {{Graphical Models for Recognizing Human Interactions}},
author = {Oliver, Nuria and Rosario, Barbara and Pentland, Alex},
booktitle = {Neural Information Processing Systems},
year = {1998},
pages = {924-930},
url = {https://mlanthology.org/neurips/1998/oliver1998neurips-graphical/}
}