Constricting Normal Latent Space for Anomaly Detection with Normal-Only Training Data

Abstract

In order to devise an anomaly detection model using only normal training data, an autoencoder (AE) is typically trained to reconstruct the data. As a result, the AE can extract normal representations in its latent space. At test time, since the AE is not trained on real anomalies, it is expected to reconstruct anomalous data poorly. However, several researchers have observed that this is not the case. In this work, we propose to limit the reconstruction capability of the AE by introducing a novel latent constriction loss, which is added to the existing reconstruction loss. Our method adds no extra computational cost to the AE at test time. Evaluations on three video anomaly detection benchmark datasets, i.e., Ped2, Avenue, and ShanghaiTech, demonstrate the effectiveness of our method in limiting the reconstruction capability of the AE, which leads to a better anomaly detection model.
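The abstract describes the training objective only at a high level (reconstruction loss plus a latent constriction term), without giving the exact form of the constriction loss. The sketch below is a minimal illustration of that general idea in PyTorch, assuming a hypothetical norm-bounding penalty on the latent code; the architecture, the `radius` and `lam` hyperparameters, and the penalty itself are illustrative stand-ins, not the paper's actual formulation.

```python
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    """Small convolutional autoencoder (illustrative; the paper's model differs)."""
    def __init__(self, latent_channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent_channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def training_loss(model, x, radius=1.0, lam=0.1):
    """Reconstruction loss plus a hypothetical latent constriction term.

    The constriction term penalizes latent vectors whose L2 norm exceeds
    `radius`, pulling normal representations into a compact region of the
    latent space. This is an assumed stand-in for the paper's loss.
    """
    x_hat, z = model(x)
    rec = nn.functional.mse_loss(x_hat, x)          # standard AE reconstruction loss
    z_flat = z.flatten(start_dim=1)
    constriction = torch.relu(z_flat.norm(dim=1) - radius).mean()
    return rec + lam * constriction
```

At test time only the AE's reconstruction error would be used as the anomaly score, so an extra training-time loss term like this adds no inference cost, consistent with the claim in the abstract.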

Cite

Text

Astrid et al. "Constricting Normal Latent Space for Anomaly Detection with Normal-Only Training Data." ICLR 2024 Workshops: PML4LRS, 2024.

Markdown

[Astrid et al. "Constricting Normal Latent Space for Anomaly Detection with Normal-Only Training Data." ICLR 2024 Workshops: PML4LRS, 2024.](https://mlanthology.org/iclrw/2024/astrid2024iclrw-constricting/)

BibTeX

@inproceedings{astrid2024iclrw-constricting,
  title     = {{Constricting Normal Latent Space for Anomaly Detection with Normal-Only Training Data}},
  author    = {Astrid, Marcella and Zaheer, Muhammad Zaigham and Lee, Seung-Ik},
  booktitle = {ICLR 2024 Workshops: PML4LRS},
  year      = {2024},
  url       = {https://mlanthology.org/iclrw/2024/astrid2024iclrw-constricting/}
}