Look at Adjacent Frames: Video Anomaly Detection Without Offline Training

Abstract

We propose a solution to detect anomalous events in videos without the need to train a model offline. Specifically, our solution is based on a randomly initialized multilayer perceptron that is optimized online to reconstruct video frames, pixel-by-pixel, from their frequency information. Based on the information shifts between adjacent frames, an incremental learner is used to update the parameters of the multilayer perceptron after observing each frame, thus allowing anomalous events to be detected along the video stream. Traditional solutions that require no offline training are limited to operating on videos with only a few abnormal frames. Our solution breaks this limit and achieves strong performance on benchmark datasets.
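The core idea in the abstract can be sketched in a few lines: a randomly initialized MLP is updated online, one SGD step per frame, to reconstruct each frame pixel-by-pixel from a frequency encoding of pixel coordinates; the pre-update reconstruction error then serves as an anomaly score that spikes when a frame differs sharply from its predecessors. The sketch below is illustrative only, assuming sinusoidal Fourier features as the frequency input, a one-hidden-layer MLP, and plain SGD as the incremental learner; none of the hyperparameters or names come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frequency encoding of pixel coordinates (assumed Fourier features,
# not necessarily the encoding used in the paper).
H = W = 8
ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
coords = np.stack([ys.ravel(), xs.ravel()], axis=1)            # (64, 2)
freqs = 2.0 ** np.arange(4)                                    # 4 frequency bands
feat = np.concatenate([np.sin(coords[:, :, None] * freqs),
                       np.cos(coords[:, :, None] * freqs)], axis=2)
X = feat.reshape(coords.shape[0], -1)                          # (64, 16)

# Randomly initialized one-hidden-layer MLP; no offline training.
d_in, d_h = X.shape[1], 32
W1 = rng.normal(0, 0.25, (d_in, d_h)); b1 = np.zeros(d_h)
W2 = rng.normal(0, 0.20, (d_h, 1));    b2 = np.zeros(1)

def observe(frame, lr=0.05):
    """Score one frame, then take one incremental SGD step toward it.

    Returns the PRE-update reconstruction error, used as the anomaly score.
    """
    global W1, b1, W2, b2
    h = np.tanh(X @ W1 + b1)                                   # (64, 32)
    pred = h @ W2 + b2                                         # (64, 1)
    err = pred - frame
    score = float(np.mean(err ** 2))
    # Manual backprop through the tiny MLP (mean-squared error loss).
    gpred = 2.0 * err / err.size
    gW2 = h.T @ gpred; gb2 = gpred.sum(0)
    gh = (gpred @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ gh;    gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
    return score

# Synthetic "video": smooth frames, with one injected anomalous frame.
normal = np.sin(3 * xs).ravel()[:, None]                       # (64, 1)
scores = []
for t in range(60):
    frame = normal + 0.01 * rng.normal(size=normal.shape)
    if t == 50:                                                # the anomaly
        frame = rng.normal(size=normal.shape)
    scores.append(observe(frame))
```

Because the MLP is continually pulled toward the recent, normal frames, its reconstruction error stays low on them and jumps on the injected anomalous frame, which is the shift-between-adjacent-frames signal the abstract describes.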

Cite

Text

Ouyang et al. "Look at Adjacent Frames: Video Anomaly Detection Without Offline Training." European Conference on Computer Vision Workshops, 2022. doi:10.1007/978-3-031-25072-9_43

Markdown

[Ouyang et al. "Look at Adjacent Frames: Video Anomaly Detection Without Offline Training." European Conference on Computer Vision Workshops, 2022.](https://mlanthology.org/eccvw/2022/ouyang2022eccvw-look/) doi:10.1007/978-3-031-25072-9_43

BibTeX

@inproceedings{ouyang2022eccvw-look,
  title     = {{Look at Adjacent Frames: Video Anomaly Detection Without Offline Training}},
  author    = {Ouyang, Yuqi and Shen, Guodong and Sanchez, Victor},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2022},
  pages     = {642--658},
  doi       = {10.1007/978-3-031-25072-9_43},
  url       = {https://mlanthology.org/eccvw/2022/ouyang2022eccvw-look/}
}