CrowdCLIP: Unsupervised Crowd Counting via Vision-Language Model

Abstract

Supervised crowd counting relies heavily on manual labeling, which is difficult and expensive, especially in dense scenes. To alleviate this problem, we propose a novel unsupervised framework for crowd counting, named CrowdCLIP. The core idea is built on two observations: 1) the recent contrastive pre-trained vision-language model (CLIP) has shown impressive performance on various downstream tasks; 2) there is a natural mapping between crowd patches and count text. To the best of our knowledge, CrowdCLIP is the first to investigate vision-language knowledge for solving the counting problem. Specifically, in the training stage, we exploit a multi-modal ranking loss, constructing ranking text prompts to match size-sorted crowd patches and thereby guide the image encoder's learning. In the testing stage, to deal with the diversity of image patches, we propose a simple yet effective progressive filtering strategy that first selects highly potential crowd patches and then maps them into the language space with various counting intervals. Extensive experiments on five challenging datasets demonstrate that the proposed CrowdCLIP achieves superior performance compared to previous unsupervised state-of-the-art counting methods. Notably, CrowdCLIP even surpasses some popular fully-supervised methods under the cross-dataset setting. The source code will be available at https://github.com/dk-liang/CrowdCLIP.
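The testing-stage idea in the abstract — mapping a crowd patch into the language space and picking the best-matching counting interval — can be sketched with plain cosine similarity. This is a minimal illustration, not the paper's implementation: the interval boundaries, the prompt wording, and the toy embeddings below are all assumptions; CrowdCLIP itself uses CLIP's image and text encoders and a progressive filtering step to discard non-crowd patches first.

```python
import numpy as np

# Hypothetical counting intervals and ranking-style text prompts
# (the paper's actual intervals and prompt template may differ).
INTERVALS = [(0, 10), (10, 50), (50, 100), (100, 500)]
PROMPTS = [f"There are between {lo} and {hi} persons in the crowd."
           for lo, hi in INTERVALS]

def normalize(v):
    """L2-normalize along the last axis so dot products are cosines."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def classify_count(patch_emb, text_embs):
    """Return the index of the counting interval whose text embedding
    has the highest cosine similarity with the image-patch embedding."""
    sims = normalize(patch_emb) @ normalize(text_embs).T
    return int(np.argmax(sims))

# Toy stand-in embeddings; a real system would obtain these from
# CLIP's text encoder and the fine-tuned image encoder.
rng = np.random.default_rng(0)
text_embs = rng.normal(size=(len(INTERVALS), 8))
patch_emb = text_embs[2] + 0.05 * rng.normal(size=8)  # close to interval 2
idx = classify_count(patch_emb, text_embs)
print(PROMPTS[idx])
```

In the full method, this per-patch interval prediction is applied only to patches that survive the progressive filtering, and the interval choices are aggregated into an image-level count.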

Cite

Text

Liang et al. "CrowdCLIP: Unsupervised Crowd Counting via Vision-Language Model." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.00283

Markdown

[Liang et al. "CrowdCLIP: Unsupervised Crowd Counting via Vision-Language Model." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/liang2023cvpr-crowdclip/) doi:10.1109/CVPR52729.2023.00283

BibTeX

@inproceedings{liang2023cvpr-crowdclip,
  title     = {{CrowdCLIP: Unsupervised Crowd Counting via Vision-Language Model}},
  author    = {Liang, Dingkang and Xie, Jiahao and Zou, Zhikang and Ye, Xiaoqing and Xu, Wei and Bai, Xiang},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {2893--2903},
  doi       = {10.1109/CVPR52729.2023.00283},
  url       = {https://mlanthology.org/cvpr/2023/liang2023cvpr-crowdclip/}
}