Recurrent Modeling of Interaction Context for Collective Activity Recognition
Abstract
Modeling of high-order interactional context, e.g., group interaction, lies at the center of collective/group activity recognition. However, most previous activity recognition methods do not offer a flexible and scalable scheme to handle the high-order context modeling problem. To explicitly address this fundamental bottleneck, we propose a recurrent interactional context modeling scheme based on the LSTM network. By utilizing the information propagation/aggregation capability of LSTM, the proposed scheme unifies the interactional feature modeling process for single-person dynamics, intra-group (i.e., persons within a group) interactions, and inter-group (i.e., group-to-group) interactions. The proposed high-order context modeling scheme produces more discriminative/descriptive interactional features. It is flexible enough to handle a varying number of input instances (e.g., different numbers of persons in a group or different numbers of groups) and scales linearly with the order of the context modeling problem. Extensive experiments on two benchmark collective/group activity datasets demonstrate the effectiveness of the proposed method.
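To make the aggregation idea concrete, below is a minimal PyTorch-style sketch of LSTM-based context aggregation over a variable number of instances, as the abstract describes for intra-group and inter-group levels. The module and dimension names (ContextAggregator, person_dim, hid) are hypothetical illustrations, not the authors' released code.

import torch
import torch.nn as nn

class ContextAggregator(nn.Module):
    """Aggregates a variable number of input feature vectors into one
    context feature by propagating them through an LSTM; the same idea
    can be reused at each level of the interaction hierarchy."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True)

    def forward(self, feats):
        # feats: (1, N, in_dim); N may vary per call (persons or groups)
        _, (h_n, _) = self.lstm(feats)
        return h_n[-1]  # (1, hidden_dim): aggregated context feature

# Hypothetical two-level usage: persons within a group, then group to group.
person_dim, hid = 128, 256
intra_group = ContextAggregator(person_dim, hid)  # persons within a group
inter_group = ContextAggregator(hid, hid)         # group-to-group context

# Two groups with different numbers of tracked persons (3 and 5).
groups = [torch.randn(1, 3, person_dim), torch.randn(1, 5, person_dim)]
group_feats = torch.cat([intra_group(g) for g in groups]).unsqueeze(0)  # (1, 2, hid)
scene_feat = inter_group(group_feats)  # (1, hid): inter-group interaction feature
print(scene_feat.shape)  # torch.Size([1, 256])

Because the LSTM consumes its inputs sequentially, each level accepts any number of instances without architectural changes, which is the flexibility the abstract emphasizes.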
Cite
Text
Wang et al. "Recurrent Modeling of Interaction Context for Collective Activity Recognition." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.783
Markdown
[Wang et al. "Recurrent Modeling of Interaction Context for Collective Activity Recognition." Conference on Computer Vision and Pattern Recognition, 2017.](https://mlanthology.org/cvpr/2017/wang2017cvpr-recurrent/) doi:10.1109/CVPR.2017.783
BibTeX
@inproceedings{wang2017cvpr-recurrent,
title = {{Recurrent Modeling of Interaction Context for Collective Activity Recognition}},
author = {Wang, Minsi and Ni, Bingbing and Yang, Xiaokang},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2017},
doi = {10.1109/CVPR.2017.783},
url = {https://mlanthology.org/cvpr/2017/wang2017cvpr-recurrent/}
}