Efficient Top Rank Optimization with Gradient Boosting for Supervised Anomaly Detection
Abstract
In this paper we address the anomaly detection problem in a supervised setting where positive examples might be very sparse. We tackle this task with a learning to rank strategy by optimizing a differentiable smoothed surrogate of the so-called Average Precision (AP). Despite its non-convexity, we show how to use it efficiently in a stochastic gradient boosting framework. We show that optimizing AP ranks the top alerts much better than state-of-the-art measures do. We demonstrate on anomaly detection tasks that the benefit of our method is even greater in highly unbalanced scenarios.
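The key idea in the abstract is to replace the non-differentiable rank indicators inside Average Precision with a smooth approximation so that gradients can be computed. A minimal sketch of one common way to do this, replacing the hard comparison 1[s_j ≥ s_i] with a sigmoid, is shown below; the function name, the `alpha` sharpness parameter, and the exact smoothing are illustrative assumptions, not the paper's precise formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def smoothed_ap(scores, labels, alpha=10.0):
    """Differentiable surrogate of Average Precision (illustrative sketch).

    The hard rank indicator 1[s_j >= s_i] is replaced by
    sigmoid(alpha * (s_j - s_i)); larger alpha gives a sharper,
    closer-to-exact (but less smooth) approximation of AP.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = np.where(labels == 1)[0]
    ap = 0.0
    for i in pos:
        # soft indicator that each example outranks example i
        soft_gt = sigmoid(alpha * (scores - scores[i]))
        # soft rank of i (counting itself as 1, excluding its self-term)
        rank_i = 1.0 + soft_gt.sum() - soft_gt[i]
        # soft number of positives ranked at or above i
        pos_above = 1.0 + soft_gt[pos].sum() - soft_gt[i]
        ap += pos_above / rank_i
    return ap / len(pos)
```

With a large `alpha`, the surrogate closely tracks exact AP: a perfect ranking of two positives over two negatives gives a value near 1, while the reversed ranking gives a value near (1/3 + 2/4)/2 ≈ 0.417. Because every term is differentiable in the scores, the gradient of this surrogate can serve as the pseudo-residual in a gradient boosting step.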
Cite
Text
Fréry et al. "Efficient Top Rank Optimization with Gradient Boosting for Supervised Anomaly Detection." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2017. doi:10.1007/978-3-319-71249-9_2
Markdown
[Fréry et al. "Efficient Top Rank Optimization with Gradient Boosting for Supervised Anomaly Detection." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2017.](https://mlanthology.org/ecmlpkdd/2017/frery2017ecmlpkdd-efficient/) doi:10.1007/978-3-319-71249-9_2
BibTeX
@inproceedings{frery2017ecmlpkdd-efficient,
title = {{Efficient Top Rank Optimization with Gradient Boosting for Supervised Anomaly Detection}},
author = {Fréry, Jordan and Habrard, Amaury and Sebban, Marc and Caelen, Olivier and He-Guelton, Liyun},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2017},
pages = {20-35},
doi = {10.1007/978-3-319-71249-9_2},
url = {https://mlanthology.org/ecmlpkdd/2017/frery2017ecmlpkdd-efficient/}
}