CADET: Calibrated Anomaly Detection for Mitigating Hardness Bias
Abstract
The detection of anomalous samples in large, high-dimensional datasets is a challenging task with numerous practical applications. Recently, state-of-the-art performance has been achieved with deep learning methods: for example, using the reconstruction error of an autoencoder as the anomaly score. However, these scores are uncalibrated: that is, they follow an unknown distribution and lack a clear interpretation. Furthermore, the reconstruction error is strongly influenced by the 'hardness' of a given sample, which leads to both false negative and false positive errors. In this paper, we empirically show the significance of this hardness bias in a range of recent deep anomaly detection methods. To address this, we propose an efficient, plug-and-play error calibration method that mitigates the hardness bias in the anomaly scores without the need to retrain the model. We verify the effectiveness of our method on a range of image, time-series, and tabular datasets and against several baseline methods.
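For reference, the sketch below illustrates the reconstruction-error scoring scheme the abstract refers to (not the CADET calibration itself): a sample's anomaly score is its per-sample autoencoder reconstruction error, which is uncalibrated and inherits the hardness bias discussed above. The architecture and dimensions are illustrative assumptions, not those used in the paper.

import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    # Minimal autoencoder; layer sizes are illustrative only.
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model, x):
    # Per-sample mean squared reconstruction error, used as an (uncalibrated) anomaly score.
    with torch.no_grad():
        recon = model(x)
    return ((x - recon) ** 2).mean(dim=1)

# Usage: higher scores suggest more anomalous samples; thresholding these raw
# errors directly is what exposes the hardness bias the paper addresses.
model = AutoEncoder(dim=20)
x = torch.randn(8, 20)
print(anomaly_scores(model, x))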
Cite
Text
Deng et al. "CADET: Calibrated Anomaly Detection for Mitigating Hardness Bias." International Joint Conference on Artificial Intelligence, 2022. doi:10.24963/IJCAI.2022/278
Markdown
[Deng et al. "CADET: Calibrated Anomaly Detection for Mitigating Hardness Bias." International Joint Conference on Artificial Intelligence, 2022.](https://mlanthology.org/ijcai/2022/deng2022ijcai-cadet/) doi:10.24963/IJCAI.2022/278
BibTeX
@inproceedings{deng2022ijcai-cadet,
title = {{CADET: Calibrated Anomaly Detection for Mitigating Hardness Bias}},
author = {Deng, Ailin and Goodge, Adam and Ang, Lang Yi and Hooi, Bryan},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2022},
pages = {2002--2008},
doi = {10.24963/IJCAI.2022/278},
url = {https://mlanthology.org/ijcai/2022/deng2022ijcai-cadet/}
}