Completely Self-Supervised Crowd Counting via Distribution Matching

Abstract

Dense crowd counting is a challenging task that demands millions of head annotations for training models. Though existing self-supervised approaches can learn good representations, they require some labeled data to map these features to the end task of density estimation. We mitigate this issue with the proposed paradigm of complete self-supervision, which does not require even a single labeled image. The only input needed to train, apart from a large set of unlabeled crowd images, is the approximate upper limit of the crowd count for the given dataset. Our method builds on the observation that natural crowds follow a power law distribution, which can be leveraged to yield error signals for backpropagation. A density regressor is first pretrained with self-supervision, and then the distribution of its predictions is matched to the prior. Experiments show that this results in effective learning of crowd features and delivers strong counting performance.
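The core matching step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the power-law exponent `alpha`, the count bounds `c_min`/`c_max`, and the use of a sorted-quantile (1D optimal-transport) distance as the matching objective are all assumptions made here for clarity.

```python
import numpy as np

def power_law_quantiles(n, alpha=2.0, c_min=1.0, c_max=1000.0):
    """Inverse-CDF quantiles of a truncated power-law prior
    p(c) ~ c^(-alpha) on [c_min, c_max].

    `alpha` and `c_max` (the approximate upper count limit mentioned
    in the abstract) are assumed hyperparameters.
    """
    u = (np.arange(n) + 0.5) / n          # evenly spaced probabilities in (0, 1)
    a = 1.0 - alpha
    lo, hi = c_min ** a, c_max ** a
    # Closed-form inverse CDF of the truncated power law.
    return (lo + u * (hi - lo)) ** (1.0 / a)

def distribution_matching_loss(pred_counts, alpha=2.0, c_max=1000.0):
    """Distance between the empirical distribution of predicted crowd
    counts and the power-law prior.

    Uses the sorted-quantile (1D Wasserstein) distance as a simple,
    differentiable-in-spirit stand-in for the paper's objective.
    """
    pred = np.sort(np.asarray(pred_counts, dtype=float))
    prior = power_law_quantiles(len(pred), alpha=alpha, c_max=c_max)
    return float(np.mean(np.abs(pred - prior)))
```

In a training loop, `pred_counts` would be the per-image sums of the regressor's predicted density maps over a batch; driving this loss toward zero pushes the prediction distribution toward the power-law prior without any labeled counts.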

Cite

Text

Sam et al. "Completely Self-Supervised Crowd Counting via Distribution Matching." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19821-2_11

Markdown

[Sam et al. "Completely Self-Supervised Crowd Counting via Distribution Matching." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/sam2022eccv-completely/) doi:10.1007/978-3-031-19821-2_11

BibTeX

@inproceedings{sam2022eccv-completely,
  title     = {{Completely Self-Supervised Crowd Counting via Distribution Matching}},
  author    = {Sam, Deepak Babu and Agarwalla, Abhinav and Joseph, Jimmy and Sindagi, Vishwanath A. and Babu, R. Venkatesh and Patel, Vishal M.},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022},
  doi       = {10.1007/978-3-031-19821-2_11},
  url       = {https://mlanthology.org/eccv/2022/sam2022eccv-completely/}
}