Social Scene Understanding: End-to-End Multi-Person Action Localization and Collective Activity Recognition
Abstract
We present a unified framework for understanding human social behaviors in raw image sequences. Our model jointly detects multiple individuals, infers their social actions, and estimates the collective actions with a single feed-forward pass through a neural network. We propose a single architecture that does not rely on external detection algorithms but rather is trained end-to-end to generate dense proposal maps that are refined via a novel inference scheme. Temporal consistency is handled via a person-level matching Recurrent Neural Network. The complete model takes as input a sequence of frames and outputs detections along with estimates of individual actions and collective activities. We demonstrate state-of-the-art performance of our algorithm on multiple publicly available benchmarks.
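The person-level matching mentioned in the abstract associates detections across frames so that each individual keeps a consistent identity over time. As an illustrative sketch only (not the authors' RNN-based implementation; the function name and distance threshold are hypothetical), a minimal frame-to-frame association by box-center distance looks like this:

```python
# Illustrative sketch of frame-to-frame person matching, the kind of
# association the paper's matching RNN maintains over time. This is a
# simple greedy nearest-center matcher, NOT the authors' method; the
# names `match_detections` and `max_dist` are hypothetical.

def match_detections(prev, curr, max_dist=50.0):
    """Match current-frame detections to previous-frame detections.

    `prev` and `curr` are lists of (x, y) box centers. Each current
    detection is greedily paired with the nearest unused previous
    detection within `max_dist`; unmatched detections would start new
    tracks. Returns a list of (prev_index or None, curr_index) pairs.
    """
    pairs = []
    used_prev = set()
    for j, (cx, cy) in enumerate(curr):
        best, best_d = None, max_dist
        for i, (px, py) in enumerate(prev):
            if i in used_prev:
                continue
            d = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            used_prev.add(best)
        pairs.append((best, j))
    return pairs

prev = [(10.0, 10.0), (100.0, 100.0)]
curr = [(12.0, 11.0), (300.0, 300.0)]
print(match_detections(prev, curr))  # -> [(0, 0), (None, 1)]
```

In the full model, identity features produced by the network (rather than raw box centers) would drive this association, and the recurrent component lets the matching use history beyond a single previous frame.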
Cite
Text
Bagautdinov et al. "Social Scene Understanding: End-to-End Multi-Person Action Localization and Collective Activity Recognition." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.365
Markdown
[Bagautdinov et al. "Social Scene Understanding: End-to-End Multi-Person Action Localization and Collective Activity Recognition." Conference on Computer Vision and Pattern Recognition, 2017.](https://mlanthology.org/cvpr/2017/bagautdinov2017cvpr-social/) doi:10.1109/CVPR.2017.365
BibTeX
@inproceedings{bagautdinov2017cvpr-social,
title = {{Social Scene Understanding: End-to-End Multi-Person Action Localization and Collective Activity Recognition}},
author = {Bagautdinov, Timur and Alahi, Alexandre and Fleuret, Francois and Fua, Pascal and Savarese, Silvio},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2017},
doi = {10.1109/CVPR.2017.365},
url = {https://mlanthology.org/cvpr/2017/bagautdinov2017cvpr-social/}
}