Online Anomaly Detection Under Adversarial Impact
Abstract
Security analysis of learning algorithms is gaining increasing importance, especially since they have become targets of deliberate obstruction in certain applications. Some security-hardened algorithms have been previously proposed for supervised learning; however, very little is known about the behavior of anomaly detection methods in such scenarios. In this contribution, we analyze the performance of a particular method—online centroid anomaly detection—in the presence of adversarial noise. Our analysis addresses three key security-related issues: derivation of an optimal attack, and analysis of its efficiency and constraints. Experimental evaluation carried out on real HTTP and exploit traces confirms the tightness of our theoretical bounds.
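To make the setting concrete, here is a minimal sketch of centroid-based online anomaly detection: the model keeps a running centroid of accepted points and flags any point whose distance from the centroid exceeds a radius. The function name, the fixed learning rate `lr`, and the update rule (move the centroid only on accepted points) are illustrative assumptions, not the authors' exact formulation from the paper.

```python
import numpy as np

def online_centroid_detector(stream, radius, lr=0.05):
    """Yield (point, is_anomaly) for each point in the stream.

    Illustrative sketch: the centroid is updated with a fixed
    learning rate, and only points inside the radius (accepted
    points) move it -- anomalous points are rejected without
    influencing the model.
    """
    centroid = None
    for x in stream:
        x = np.asarray(x, dtype=float)
        if centroid is None:
            # Initialize the centroid with the first point.
            centroid = x.copy()
            yield x, False
            continue
        is_anomaly = np.linalg.norm(x - centroid) > radius
        if not is_anomaly:
            # Accepted point: shift the centroid toward it.
            centroid += lr * (x - centroid)
        yield x, is_anomaly
```

The attack analyzed in the paper exploits exactly this update step: an adversary can inject points that are individually accepted yet gradually drag the centroid toward a target region, which is why the rate at which accepted points displace the centroid matters for the security bounds.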
Cite
Text
Kloft and Laskov. "Online Anomaly Detection Under Adversarial Impact." Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010.

Markdown
[Kloft and Laskov. "Online Anomaly Detection Under Adversarial Impact." Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010.](https://mlanthology.org/aistats/2010/kloft2010aistats-online/)

BibTeX
@inproceedings{kloft2010aistats-online,
  title = {{Online Anomaly Detection Under Adversarial Impact}},
  author = {Kloft, Marius and Laskov, Pavel},
  booktitle = {Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics},
  year = {2010},
  pages = {405-412},
  volume = {9},
  url = {https://mlanthology.org/aistats/2010/kloft2010aistats-online/}
}