VAMBAM: View and Motion-Based Aspect Models for Distributed Omnidirectional Vision Systems
Abstract
This paper proposes a new model for gesture recognition. The model, called view and motion-based aspect models (VAMBAM), is an omnidirectional view-based aspect model built on motion-based segmentation. It realizes location-free and rotation-free gesture recognition with a distributed omnidirectional vision system (DOVS). The distributed vision system, consisting of multiple omnidirectional cameras, is a prototype of a perceptual information infrastructure for monitoring and recognizing the real world. In addition to the concept of VAMBAM, this paper shows how the model realizes robust and real-time visual recognition with the DOVS.
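The abstract does not spell out the matching procedure, but the core idea of a view-based aspect model, where each gesture is stored as motion templates seen from many viewing directions and matched against observations from multiple distributed cameras, can be illustrated with a minimal sketch. The Python below is an illustrative assumption of how such matching might look: the function names, the normalized-correlation matcher, and all data shapes are hypothetical and are not taken from the paper.

```python
import numpy as np

# Hypothetical sketch of view-based aspect matching across multiple
# omnidirectional cameras. Each gesture model stores motion templates
# ("aspects") observed from many viewing directions; recognition picks
# the gesture whose best-matching aspects, combined over all cameras,
# score highest.

def match_score(observed: np.ndarray, template: np.ndarray) -> float:
    """Normalized cross-correlation between two motion templates."""
    o = (observed - observed.mean()) / (observed.std() + 1e-8)
    t = (template - template.mean()) / (template.std() + 1e-8)
    return float((o * t).mean())

def recognize(camera_observations, gesture_models):
    """camera_observations: one motion template per camera.
    gesture_models: dict mapping gesture name -> list of aspect templates.
    Returns the gesture whose aspects best explain all camera views."""
    best_gesture, best_total = None, -np.inf
    for name, aspects in gesture_models.items():
        # For each camera, keep the best-matching aspect: because aspects
        # cover all viewing directions, the match does not depend on the
        # subject's orientation (rotation-free), and combining scores from
        # distributed cameras removes the dependence on location.
        total = sum(
            max(match_score(obs, a) for a in aspects)
            for obs in camera_observations
        )
        if total > best_total:
            best_gesture, best_total = name, total
    return best_gesture
```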
Cite
Text

Ishiguro and Nishimura. "VAMBAM: View and Motion-Based Aspect Models for Distributed Omnidirectional Vision Systems." International Joint Conference on Artificial Intelligence, 2001.

Markdown

[Ishiguro and Nishimura. "VAMBAM: View and Motion-Based Aspect Models for Distributed Omnidirectional Vision Systems." International Joint Conference on Artificial Intelligence, 2001.](https://mlanthology.org/ijcai/2001/ishiguro2001ijcai-vambam/)

BibTeX
@inproceedings{ishiguro2001ijcai-vambam,
title = {{VAMBAM: View and Motion-Based Aspect Models for Distributed Omnidirectional Vision Systems}},
author = {Ishiguro, Hiroshi and Nishimura, Takuichi},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2001},
pages = {1375--1380},
url = {https://mlanthology.org/ijcai/2001/ishiguro2001ijcai-vambam/}
}