High Level Activity Recognition Using Low Resolution Wearable Vision
Abstract
This paper presents a system designed to serve as the enabling platform for a wearable assistant. The method observes manipulations from a wearable camera and classifies activities from roughly stabilized low-resolution images (160×120 pixels) with the help of a 3-level Dynamic Bayesian Network and adapted temporal templates. Our motivation is to explore robust but computationally inexpensive visual methods that perform as much activity inference as possible without resorting to more complex object or hand detectors. We describe the method and the results obtained, as well as the motivation for further work in the area of wearable visual sensing.
Cite
Text
Sundaram and Mayol-Cuevas. "High Level Activity Recognition Using Low Resolution Wearable Vision." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2009. doi:10.1109/CVPRW.2009.5204355
Markdown
[Sundaram and Mayol-Cuevas. "High Level Activity Recognition Using Low Resolution Wearable Vision." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2009.](https://mlanthology.org/cvprw/2009/sundaram2009cvprw-high/) doi:10.1109/CVPRW.2009.5204355
BibTeX
@inproceedings{sundaram2009cvprw-high,
title = {{High Level Activity Recognition Using Low Resolution Wearable Vision}},
author = {Sundaram, Sudeep and Mayol-Cuevas, Walterio W.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2009},
pages = {25-32},
doi = {10.1109/CVPRW.2009.5204355},
url = {https://mlanthology.org/cvprw/2009/sundaram2009cvprw-high/}
}