Gesture + Play Exploring Full-Body Navigation for Virtual Environments
Abstract
Navigating virtual environments usually requires a wired interface, game console, or keyboard. The advent of perceptual interface techniques allows a new option: the passive and untethered sensing of users' pose and gesture, allowing them to maneuver through and manipulate virtual worlds. We describe new algorithms for interacting with 3-D environments using real-time articulated body tracking with standard cameras and personal computers. Our method is based on rigid stereo-motion estimation algorithms and uses a linear technique for enforcing articulation constraints. With our tracking system, users can navigate virtual environments using 3-D gestures and body poses. We analyze the space of possible perceptual interface abstractions for full-body navigation and present a prototype system based on these results. Finally, we describe an initial evaluation of our prototype system with users guiding avatars through a series of 3-D virtual game worlds.
Cite
Text
Tollmar et al. "Gesture + Play Exploring Full-Body Navigation for Virtual Environments." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2003. doi:10.1109/CVPRW.2003.10046
Markdown
[Tollmar et al. "Gesture + Play Exploring Full-Body Navigation for Virtual Environments." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2003.](https://mlanthology.org/cvprw/2003/tollmar2003cvprw-gesture/) doi:10.1109/CVPRW.2003.10046
BibTeX
@inproceedings{tollmar2003cvprw-gesture,
title = {{Gesture + Play Exploring Full-Body Navigation for Virtual Environments}},
author = {Tollmar, Konrad and Demirdjian, David and Darrell, Trevor},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2003},
pages = {47},
doi = {10.1109/CVPRW.2003.10046},
url = {https://mlanthology.org/cvprw/2003/tollmar2003cvprw-gesture/}
}