People Counting in Videos by Fusing Temporal Cues from Spatial Context-Aware Convolutional Neural Networks
Abstract
We present an efficient method for people counting in video sequences from fixed cameras by utilising the responses of spatially context-aware convolutional neural networks (CNN) in the temporal domain. For stationary cameras, the background information remains fairly static, while foreground characteristics, such as size and orientation, may depend on their image location; thus, using whole frames for training a CNN improves the differentiation between background and foreground pixels. Foreground density, representing the presence of people in the environment, can then be associated with people counts. Moreover, fusing the count estimations in the temporal domain can further enhance the accuracy of the final count. Our methodology was tested on the publicly available Mall dataset and achieved a mean deviation error of 0.091.
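The pipeline outlined above (per-pixel foreground density → per-frame count → temporal fusion) could be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and a simple moving average stands in for whatever temporal fusion rule the authors use.

```python
import numpy as np

def count_from_density(density_map):
    """Estimate the people count in one frame by integrating a
    per-pixel foreground density map (as a CNN head might output)."""
    return float(np.sum(density_map))

def fuse_counts_temporally(per_frame_counts, window=5):
    """Fuse per-frame count estimates over a sliding temporal window.
    A moving average is used here as one illustrative fusion rule."""
    counts = np.asarray(per_frame_counts, dtype=float)
    kernel = np.ones(window) / window
    # mode="same" yields one fused estimate per input frame
    return np.convolve(counts, kernel, mode="same")
```

For example, a sequence of noisy per-frame estimates such as `[10.2, 9.7, 10.1, ...]` would be smoothed toward the underlying count, reducing frame-level estimation jitter.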
Cite
Text
Sourtzinos et al. "People Counting in Videos by Fusing Temporal Cues from Spatial Context-Aware Convolutional Neural Networks." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-48881-3_46
Markdown
[Sourtzinos et al. "People Counting in Videos by Fusing Temporal Cues from Spatial Context-Aware Convolutional Neural Networks." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/sourtzinos2016eccv-people/) doi:10.1007/978-3-319-48881-3_46
BibTeX
@inproceedings{sourtzinos2016eccv-people,
title = {{People Counting in Videos by Fusing Temporal Cues from Spatial Context-Aware Convolutional Neural Networks}},
author = {Sourtzinos, Panos and Velastin, Sergio A. and Jara, Miguel and Zegers, Pablo and Makris, Dimitrios},
booktitle = {European Conference on Computer Vision},
year = {2016},
pages = {655-667},
doi = {10.1007/978-3-319-48881-3_46},
url = {https://mlanthology.org/eccv/2016/sourtzinos2016eccv-people/}
}