Motion Fused Frames: Data Level Fusion Strategy for Hand Gesture Recognition

Abstract

Acquiring the spatio-temporal states of an action is the most crucial step for action classification. In this paper, we propose a data-level fusion strategy, Motion Fused Frames (MFFs), designed to fuse motion information into static images as better representatives of the spatio-temporal states of an action. MFFs can be used as input to any deep learning architecture with very little modification to the network. We evaluate MFFs on hand gesture recognition tasks using three video datasets - Jester, ChaLearn LAP IsoGD and the NVIDIA Dynamic Hand Gesture dataset - which require capturing the long-term temporal relations of hand movements. Our approach obtains very competitive performance on the Jester and ChaLearn benchmarks, with classification accuracies of 96.28% and 57.4%, respectively, while achieving state-of-the-art performance with 84.7% accuracy on the NVIDIA benchmark.
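
For intuition, here is a minimal sketch of the data-level fusion the abstract describes: an MFF stacks the optical-flow fields of the frames preceding a static RGB frame onto that frame along the channel axis. The function name, shapes, and the number of flow frames below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def build_mff(rgb_frame, flow_frames):
    """Fuse optical-flow frames into a static RGB frame along the
    channel axis, yielding one Motion Fused Frame (MFF).

    rgb_frame   : (H, W, 3) array, the segment's static RGB frame.
    flow_frames : list of (H, W, 2) arrays, the preceding optical-flow
                  fields (horizontal and vertical components each).
    Returns an (H, W, 3 + 2*N) array usable as network input.
    """
    channels = [rgb_frame.astype(np.float32)]
    channels.extend(flow.astype(np.float32) for flow in flow_frames)
    return np.concatenate(channels, axis=-1)

# Illustrative example: fusing 3 flow frames into 1 color frame gives a
# 9-channel input (3 RGB channels + 3 x 2 flow components).
rgb = np.zeros((112, 112, 3), dtype=np.uint8)
flows = [np.zeros((112, 112, 2), dtype=np.float32) for _ in range(3)]
mff = build_mff(rgb, flows)
print(mff.shape)  # (112, 112, 9)
```

Because the fused input has more than three channels, the first convolutional layer of a pretrained CNN would need to be widened to accept them; this is presumably the "very little modification" the abstract refers to.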

Cite

Text

Köpüklü et al. "Motion Fused Frames: Data Level Fusion Strategy for Hand Gesture Recognition." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2018. doi:10.1109/CVPRW.2018.00284

Markdown

[Köpüklü et al. "Motion Fused Frames: Data Level Fusion Strategy for Hand Gesture Recognition." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2018.](https://mlanthology.org/cvprw/2018/kopuklu2018cvprw-motion/) doi:10.1109/CVPRW.2018.00284

BibTeX

@inproceedings{kopuklu2018cvprw-motion,
  title     = {{Motion Fused Frames: Data Level Fusion Strategy for Hand Gesture Recognition}},
  author    = {Köpüklü, Okan and Kose, Neslihan and Rigoll, Gerhard},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2018},
  pages     = {2103--2111},
  doi       = {10.1109/CVPRW.2018.00284},
  url       = {https://mlanthology.org/cvprw/2018/kopuklu2018cvprw-motion/}
}