Large-Scale Multimodal Gesture Recognition Using Heterogeneous Networks

Abstract

This paper presents the method designed for the 2017 ChaLearn LAP Large-scale Gesture Recognition Challenge. The proposed method converts a video sequence into multiple body-level and hand-level dynamic images through bidirectional rank pooling, which serve as inputs to Convolutional Neural Networks (ConvNets). It also adopts Convolutional LSTM networks (ConvLSTM) to learn long-term spatiotemporal features from the short-term spatiotemporal features extracted by a 3D convolutional neural network (3DCNN), at both body and hand levels. Such a heterogeneous network system effectively learns different levels of spatiotemporal features that complement each other, substantially improving recognition accuracy. The method has been evaluated on the 2017 isolated and continuous ChaLearn LAP Large-scale Gesture Recognition Challenge datasets, and the results rank among the top performances.
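As context for the rank-pooling step mentioned in the abstract, the sketch below shows how a dynamic image can be computed by approximate rank pooling, i.e. a weighted temporal sum of frames using the closed-form coefficients of Bilen et al.'s "Dynamic Image Networks". The function names and the bidirectional wrapper are illustrative assumptions, not the authors' code.

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a list of frames (H, W[, C] arrays) into one dynamic image
    via approximate rank pooling: sum_t alpha_t * frame_t, where
    alpha_t = 2(T - t + 1) - (T + 1)(H_T - H_{t-1}) and H_t is the
    t-th harmonic number (Bilen et al., CVPR 2016)."""
    T = len(frames)
    H = np.cumsum(1.0 / np.arange(1, T + 1))          # harmonic numbers H_1..H_T
    H_prev = np.concatenate(([0.0], H[:-1]))          # H_0..H_{T-1}
    t = np.arange(1, T + 1)
    alpha = 2.0 * (T - t + 1) - (T + 1) * (H[-1] - H_prev)
    # Weighted sum over the time axis; weights sum to zero, so a static
    # clip yields an (approximately) all-zero dynamic image.
    return np.tensordot(alpha, np.stack(frames), axes=1)

def bidirectional_dynamic_images(frames):
    """Forward and backward dynamic images, as in bidirectional rank pooling."""
    return dynamic_image(frames), dynamic_image(frames[::-1])
```

In the paper's pipeline such images would be computed at both body and hand level and fed to 2D ConvNets, complementing the 3DCNN+ConvLSTM stream.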

Cite

Text

Wang et al. "Large-Scale Multimodal Gesture Recognition Using Heterogeneous Networks." IEEE/CVF International Conference on Computer Vision Workshops, 2017. doi:10.1109/ICCVW.2017.370

Markdown

[Wang et al. "Large-Scale Multimodal Gesture Recognition Using Heterogeneous Networks." IEEE/CVF International Conference on Computer Vision Workshops, 2017.](https://mlanthology.org/iccvw/2017/wang2017iccvw-largescale/) doi:10.1109/ICCVW.2017.370

BibTeX

@inproceedings{wang2017iccvw-largescale,
  title     = {{Large-Scale Multimodal Gesture Recognition Using Heterogeneous Networks}},
  author    = {Wang, Huogen and Wang, Pichao and Song, Zhanjie and Li, Wanqing},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2017},
  pages     = {3129--3137},
  doi       = {10.1109/ICCVW.2017.370},
  url       = {https://mlanthology.org/iccvw/2017/wang2017iccvw-largescale/}
}