Learning Event-Relevant Factors for Video Anomaly Detection

Abstract

Most video anomaly detection methods discriminate events that deviate from normal patterns as anomalies. However, these methods are prone to interference from event-irrelevant factors, such as background textures and object scale variations, which increases the false detection rate. In this paper, we propose to explicitly learn event-relevant factors to eliminate the interference of event-irrelevant factors with anomaly predictions. To this end, we introduce a causal generative model to separate the event-relevant factors from the event-irrelevant ones in videos, and learn the prototypes of event-relevant factors in a memory augmentation module. We design a causal objective function to optimize the causal generative model and develop a counterfactual learning strategy to guide anomaly predictions, which increases the influence of the event-relevant factors. Extensive experiments demonstrate the effectiveness of our method for video anomaly detection.
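The memory augmentation module described above stores prototypes of event-relevant factors that are retrieved at inference time. The paper's exact formulation is not reproduced here; the sketch below shows a common prototype-retrieval pattern for such memory modules (soft attention over learned slots), with all names and the temperature parameter being illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def memory_read(query, memory, temperature=0.1):
    """Retrieve a prototype as an attention-weighted sum of memory slots.

    This is a generic sketch of soft memory addressing, not the paper's
    exact module. query: (d,) feature vector; memory: (m, d) prototype slots.
    """
    # Cosine similarity between the query feature and each prototype slot.
    q = query / (np.linalg.norm(query) + 1e-8)
    slots = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
    sims = slots @ q                       # (m,) similarity scores
    # Soft addressing: lower temperature gives sharper attention weights.
    weights = np.exp(sims / temperature)
    weights /= weights.sum()
    # Retrieved prototype: weighted combination of the stored slots.
    return weights @ memory, weights

# Illustrative usage with random prototypes.
rng = np.random.default_rng(0)
memory = rng.normal(size=(8, 16))            # 8 prototype slots, dim 16
query = memory[3] + 0.01 * rng.normal(size=16)  # query close to slot 3
prototype, weights = memory_read(query, memory)
```

Replacing a query feature with its nearest prototypes in this way keeps anomaly scores focused on deviations in the event-relevant factors rather than nuisance variation.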

Cite

Text

Sun et al. "Learning Event-Relevant Factors for Video Anomaly Detection." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/aaai.v37i2.25334

Markdown

[Sun et al. "Learning Event-Relevant Factors for Video Anomaly Detection." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/sun2023aaai-learning/) doi:10.1609/aaai.v37i2.25334

BibTeX

@inproceedings{sun2023aaai-learning,
  title     = {{Learning Event-Relevant Factors for Video Anomaly Detection}},
  author    = {Sun, Che and Shi, Chenrui and Jia, Yunde and Wu, Yuwei},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {2384--2392},
  doi       = {10.1609/aaai.v37i2.25334},
  url       = {https://mlanthology.org/aaai/2023/sun2023aaai-learning/}
}