Near-Optimal Evasion of Convex-Inducing Classifiers

Abstract

Classifiers are often used to detect miscreant activities. We study how an adversary can efficiently query a classifier to elicit information that allows the adversary to evade detection at near-minimal cost. We generalize results of Lowd and Meek (2005) to convex-inducing classifiers. We present algorithms that construct undetected instances of near-minimal cost using a number of queries that is only polynomial in the dimension of the feature space, and without reverse engineering the decision boundary.
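To give a flavor of the query-based setting, here is a minimal sketch (not the paper's algorithm) of the kind of membership-query primitive such evasion methods build on: a binary line search that, given a known undetected point and a detected target, finds a near-boundary undetected instance using only logarithmically many classifier queries along one direction. The names `classify`, `x_neg`, and `x_adv` are illustrative assumptions.

```python
import numpy as np

def binary_line_search(classify, x_neg, x_adv, eps=1e-6):
    """Search the segment from a known-undetected point x_neg toward the
    adversary's desired (detected) instance x_adv for the decision boundary.

    classify(x) returns True if x is detected (positive class).
    Returns an undetected instance within eps of the boundary, using
    O(log(1/eps)) queries to classify.
    """
    lo, hi = 0.0, 1.0  # fraction of the way from x_neg toward x_adv
    # Invariant: the point at fraction lo is undetected; x_adv is detected.
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        x = x_neg + mid * (x_adv - x_neg)
        if classify(x):
            hi = mid  # detected: boundary lies before mid
        else:
            lo = mid  # still undetected: move closer to x_adv
    return x_neg + lo * (x_adv - x_neg)
```

Because each line search is cheap, an attacker can afford to probe many directions; the paper's contribution is showing that when the set induced by one of the classifier's classes is convex, polynomially many such probes suffice for near-minimal-cost evasion.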

Cite

Text

Nelson et al. "Near-Optimal Evasion of Convex-Inducing Classifiers." Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010.

Markdown

[Nelson et al. "Near-Optimal Evasion of Convex-Inducing Classifiers." Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010.](https://mlanthology.org/aistats/2010/nelson2010aistats-nearoptimal/)

BibTeX

@inproceedings{nelson2010aistats-nearoptimal,
  title     = {{Near-Optimal Evasion of Convex-Inducing Classifiers}},
  author    = {Nelson, Blaine and Rubinstein, Benjamin and Huang, Ling and Joseph, Anthony and Lau, Shing-hon and Lee, Steven and Rao, Satish and Tran, Anthony and Tygar, Doug},
  booktitle = {Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics},
  year      = {2010},
  pages     = {549--556},
  volume    = {9},
  url       = {https://mlanthology.org/aistats/2010/nelson2010aistats-nearoptimal/}
}