R2-AD2: Detecting Anomalies by Analysing the Raw Gradient

Abstract

Neural networks follow a gradient-based learning scheme, adapting their mapping parameters by back-propagating the output loss. Samples unlike the ones seen during training cause a different gradient distribution. Based on this intuition, we design a novel semi-supervised anomaly detection method called R2-AD2. By analysing the temporal distribution of the gradient over multiple training steps, we reliably detect point anomalies in strict semi-supervised settings. Instead of domain-dependent features, we input the raw gradient caused by the sample under test to an end-to-end recurrent neural network architecture. R2-AD2 works in a purely data-driven way and is thus readily applicable in a variety of important use cases of anomaly detection.
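The core intuition can be illustrated with a toy sketch: record the raw gradient a probe sample induces in a model at several training checkpoints, then score anomalies from that gradient sequence. The sketch below is not the authors' implementation — it substitutes a tiny numpy linear autoencoder for the trained network and a simple mean-gradient-norm score for the paper's recurrent scorer; all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training data: 2-D points concentrated along one direction.
direction = np.array([1.0, 1.0]) / np.sqrt(2.0)
X = rng.normal(0.0, 1.0, size=(300, 1)) * direction \
    + rng.normal(0.0, 0.1, size=(300, 2))

# Tiny linear autoencoder with a 1-D bottleneck: x -> W1 x -> W2 W1 x.
W1 = rng.normal(0.0, 0.1, size=(1, 2))
W2 = rng.normal(0.0, 0.1, size=(2, 1))
lr = 0.05

def raw_gradient(W1, W2, x):
    """Flattened gradient of the reconstruction loss ||W2 W1 x - x||^2
    with respect to all parameters, for a single sample x."""
    h = W1 @ x                           # latent code, shape (1,)
    e = W2 @ h - x                       # reconstruction error, shape (2,)
    gW1 = 2.0 * np.outer(W2.T @ e, x)    # dL/dW1, shape (1, 2)
    gW2 = 2.0 * np.outer(e, h)           # dL/dW2, shape (2, 1)
    return np.concatenate([gW1.ravel(), gW2.ravel()])

# Two probe samples: one on the normal manifold, one far off it.
normal_probe = 0.7 * direction
anomaly_probe = np.array([6.0, 0.0])
grads_normal, grads_anomaly = [], []

# Plain SGD training; at checkpoints, snapshot each probe's raw gradient,
# mimicking the temporal gradient distribution R2-AD2 analyses.
for step in range(200):
    x = X[rng.integers(len(X))]
    g = raw_gradient(W1, W2, x)
    W1 -= lr * g[:2].reshape(1, 2)
    W2 -= lr * g[2:].reshape(2, 1)
    if step % 40 == 0:
        grads_normal.append(raw_gradient(W1, W2, normal_probe))
        grads_anomaly.append(raw_gradient(W1, W2, anomaly_probe))

# Stand-in for the paper's recurrent scorer: mean gradient norm over time.
def score(grads):
    return float(np.mean([np.linalg.norm(g) for g in grads]))

print(f"normal probe score:  {score(grads_normal):.3f}")
print(f"anomaly probe score: {score(grads_anomaly):.3f}")
```

In this toy setting the out-of-distribution probe induces markedly larger gradients at every checkpoint; R2-AD2 instead feeds the full raw gradient sequence to a recurrent network, which can exploit more of the gradient's structure than a norm.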

Cite

Text

Schulze et al. "R2-AD2: Detecting Anomalies by Analysing the Raw Gradient." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2022. doi:10.1007/978-3-031-26387-3_13

Markdown

[Schulze et al. "R2-AD2: Detecting Anomalies by Analysing the Raw Gradient." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2022.](https://mlanthology.org/ecmlpkdd/2022/schulze2022ecmlpkdd-r2ad2/) doi:10.1007/978-3-031-26387-3_13

BibTeX

@inproceedings{schulze2022ecmlpkdd-r2ad2,
  title     = {{R2-AD2: Detecting Anomalies by Analysing the Raw Gradient}},
  author    = {Schulze, Jan-Philipp and Sperl, Philip and Radutoiu, Ana and Sagebiel, Carla and Böttinger, Konstantin},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2022},
  pages     = {209--224},
  doi       = {10.1007/978-3-031-26387-3_13},
  url       = {https://mlanthology.org/ecmlpkdd/2022/schulze2022ecmlpkdd-r2ad2/}
}