VadCLIP: Adapting Vision-Language Models for Weakly Supervised Video Anomaly Detection

Abstract

The recent contrastive language-image pre-training (CLIP) model has shown great success in a wide range of image-level tasks, revealing a remarkable ability to learn powerful visual representations with rich semantics. An open and worthwhile problem is how to efficiently adapt such a strong model to the video domain and design a robust video anomaly detector. In this work, we propose VadCLIP, a new paradigm for weakly supervised video anomaly detection (WSVAD) that leverages the frozen CLIP model directly, without any pre-training or fine-tuning. Unlike current works that directly feed extracted features into a weakly supervised classifier for frame-level binary classification, VadCLIP makes full use of the fine-grained vision-language associations learned by CLIP and adopts a dual-branch design. One branch simply utilizes visual features for coarse-grained binary classification, while the other fully exploits fine-grained language-image alignment. With the benefit of both branches, VadCLIP achieves coarse-grained and fine-grained video anomaly detection by transferring pre-trained knowledge from CLIP to the WSVAD task. We conduct extensive experiments on two commonly used benchmarks, demonstrating that VadCLIP achieves the best performance on both coarse-grained and fine-grained WSVAD, surpassing state-of-the-art methods by a large margin. Specifically, VadCLIP achieves 84.51% AP and 88.02% AUC on XD-Violence and UCF-Crime, respectively. Code and features are released at https://github.com/nwpu-zxr/VadCLIP.
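The dual-branch idea above can be sketched in a few lines. This is not the paper's implementation; it is a minimal NumPy illustration with toy dimensions, where random tensors stand in for frozen CLIP frame features and class-prompt text embeddings, and a top-k mean stands in for weakly supervised (MIL-style) video-level aggregation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for frozen CLIP outputs:
# 16 video frames with 512-d visual features, and text embeddings
# for two class prompts (e.g. "normal", "violence").
frame_feats = rng.standard_normal((16, 512))
text_feats = rng.standard_normal((2, 512))

def l2norm(x):
    # Normalize rows to unit length for cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Branch 1 (coarse-grained): a simple binary head on visual features,
# producing a per-frame anomaly score via a sigmoid.
w = rng.standard_normal(512) * 0.01  # toy classifier weights
coarse_scores = 1.0 / (1.0 + np.exp(-(frame_feats @ w)))  # shape (16,)

# Branch 2 (fine-grained): language-image alignment -- cosine similarity
# between each frame and each class prompt, softmaxed over classes.
sims = l2norm(frame_feats) @ l2norm(text_feats).T  # shape (16, 2)
logits = sims / 0.07                               # CLIP-style temperature
fine_probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# Weakly supervised video-level score: mean of the top-k frame scores,
# since only video-level labels are available in WSVAD.
k = 4
video_anomaly = np.sort(coarse_scores)[-k:].mean()
```

Branch 1 yields a binary anomaly score per frame, while branch 2 assigns each frame a distribution over text-described categories, mirroring the coarse- and fine-grained outputs described in the abstract.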

Cite

Text

Wu et al. "VadCLIP: Adapting Vision-Language Models for Weakly Supervised Video Anomaly Detection." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I6.28423

Markdown

[Wu et al. "VadCLIP: Adapting Vision-Language Models for Weakly Supervised Video Anomaly Detection." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/wu2024aaai-vadclip/) doi:10.1609/AAAI.V38I6.28423

BibTeX

@inproceedings{wu2024aaai-vadclip,
  title     = {{VadCLIP: Adapting Vision-Language Models for Weakly Supervised Video Anomaly Detection}},
  author    = {Wu, Peng and Zhou, Xuerong and Pang, Guansong and Zhou, Lingru and Yan, Qingsen and Wang, Peng and Zhang, Yanning},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {6074--6082},
  doi       = {10.1609/AAAI.V38I6.28423},
  url       = {https://mlanthology.org/aaai/2024/wu2024aaai-vadclip/}
}