End-to-End People Detection in Crowded Scenes

Abstract

Current people detectors operate either by scanning an image in a sliding window fashion or by classifying a discrete set of proposals. We propose a model that is based on decoding an image into a set of people detections. Our system takes an image as input and directly outputs a set of distinct detection hypotheses. Because we generate predictions jointly, common post-processing steps such as non-maximum suppression are unnecessary. We use a recurrent LSTM layer for sequence generation and train our model end-to-end with a new loss function that operates on sets of detections. We demonstrate the effectiveness of our approach on the challenging task of detecting people in crowded scenes.
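The abstract refers to a loss function that operates on sets of detections rather than on individual boxes. As a rough illustration of that idea, the sketch below matches a predicted sequence of boxes to the ground-truth set with a one-to-one (Hungarian) assignment before computing regression and confidence terms. The function name, cost terms, and weighting are assumptions for illustration only, not the paper's implementation.

```python
# Illustrative sketch (assumed names and cost terms, not the authors' code):
# match predictions to ground truth one-to-one, then score matched pairs.
import torch
from scipy.optimize import linear_sum_assignment


def set_detection_loss(pred_boxes, pred_conf, gt_boxes):
    """pred_boxes: (N, 4), pred_conf: (N,) logits, gt_boxes: (M, 4), M <= N."""
    # Pairwise L1 distance between every prediction and every ground-truth box.
    cost = torch.cdist(pred_boxes, gt_boxes, p=1)            # (N, M)

    # One-to-one assignment minimising the total matching cost.
    rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())

    # Box regression loss over matched pairs only.
    box_loss = cost[rows, cols].mean()

    # Matched predictions should be confident; unmatched ones should not.
    target = torch.zeros_like(pred_conf)
    target[list(rows)] = 1.0
    conf_loss = torch.nn.functional.binary_cross_entropy_with_logits(
        pred_conf, target
    )
    return box_loss + conf_loss
```

In the paper, a loss of this general form is paired with an LSTM decoder that emits detections one at a time, so the model learns to produce each person exactly once and no separate non-maximum suppression step is needed.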

Cite

Text

Stewart et al. "End-to-End People Detection in Crowded Scenes." Conference on Computer Vision and Pattern Recognition, 2016. doi:10.1109/CVPR.2016.255

Markdown

[Stewart et al. "End-to-End People Detection in Crowded Scenes." Conference on Computer Vision and Pattern Recognition, 2016.](https://mlanthology.org/cvpr/2016/stewart2016cvpr-endtoend/) doi:10.1109/CVPR.2016.255

BibTeX

@inproceedings{stewart2016cvpr-endtoend,
  title     = {{End-to-End People Detection in Crowded Scenes}},
  author    = {Stewart, Russell and Andriluka, Mykhaylo and Ng, Andrew Y.},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2016},
  doi       = {10.1109/CVPR.2016.255},
  url       = {https://mlanthology.org/cvpr/2016/stewart2016cvpr-endtoend/}
}