GPU Accelerated Left/Right Hand-Segmentation in First Person Vision
Abstract
Wearable cameras allow users to record their daily activities from a user-centered (First Person Vision) perspective. Due to their favourable location, they frequently capture the hands of the user, and may thus represent a promising user-machine interaction tool for different applications. Existing First Person Vision methods treat the hands as a background/foreground segmentation problem, which ignores two important issues: (i) each pixel is classified sequentially, creating a long processing queue; (ii) hands are not a single "skin-like" moving element but a pair of interacting entities (left and right hand). This paper proposes a GPU-accelerated implementation of a left/right hand-segmentation algorithm. The GPU implementation exploits the parallel nature of the pixel-by-pixel classification strategy. The left/right identification is carried out by a competitive likelihood test based on the position and the angle of the segmented pixels.
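The competitive likelihood test described above can be illustrated with a minimal sketch: each segmented hand blob is reduced to a normalized horizontal position and an orientation angle (from image moments), and the blob is assigned to whichever hand label yields the higher likelihood. The function names and the Gaussian likelihood parameters below are illustrative assumptions, not the model used in the paper.

```python
import numpy as np

def blob_features(mask):
    """Normalized centroid x and ellipse orientation of a binary hand mask,
    computed from second-order image moments."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # orientation in radians
    return cx / mask.shape[1], angle

def hand_likelihood(x_norm, angle, hand):
    """Toy Gaussian likelihood (parameters are placeholders): left hands tend
    to appear on the left half of the frame, right hands on the right half,
    with mirrored orientations."""
    mu_x, mu_a = (0.3, 0.6) if hand == "left" else (0.7, -0.6)
    return (np.exp(-((x_norm - mu_x) ** 2) / 0.08)
            * np.exp(-((angle - mu_a) ** 2) / 0.5))

def assign_hands(masks):
    """Competitively label each segmented blob as 'left' or 'right' by
    comparing the two likelihoods."""
    labels = []
    for m in masks:
        x, a = blob_features(m)
        left, right = hand_likelihood(x, a, "left"), hand_likelihood(x, a, "right")
        labels.append("left" if left >= right else "right")
    return labels
```

In the paper's GPU pipeline the per-pixel skin classification is the parallelized stage; the competitive test above runs once per segmented blob, so it remains cheap even on the CPU.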
Cite
Text
Betancourt et al. "GPU Accelerated Left/Right Hand-Segmentation in First Person Vision." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46604-0_36
Markdown
[Betancourt et al. "GPU Accelerated Left/Right Hand-Segmentation in First Person Vision." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/betancourt2016eccv-gpu/) doi:10.1007/978-3-319-46604-0_36
BibTeX
@inproceedings{betancourt2016eccv-gpu,
title = {{GPU Accelerated Left/Right Hand-Segmentation in First Person Vision}},
author = {Betancourt, Alejandro and Marcenaro, Lucio and Barakova, Emilia I. and Rauterberg, Matthias and Regazzoni, Carlo S.},
booktitle = {European Conference on Computer Vision},
year = {2016},
pages = {504--517},
doi = {10.1007/978-3-319-46604-0_36},
url = {https://mlanthology.org/eccv/2016/betancourt2016eccv-gpu/}
}