Adversarial Examples for K-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams

Abstract

Adversarial examples are a widely studied phenomenon in machine learning models. While most of the attention has been focused on neural networks, other practical models also suffer from this issue. In this work, we propose an algorithm for evaluating the adversarial robustness of $k$-nearest neighbor classification, i.e., finding a minimum-norm adversarial example. Diverging from previous work, we propose the first geometric approach, performing a search that expands outwards from a given input point. At a high level, the search radius expands to the nearby higher-order Voronoi cells until we find a cell that is classified differently from the input point. To scale the algorithm to a large $k$, we introduce approximation steps that find perturbations with smaller norms than the baselines on a variety of datasets. Furthermore, we analyze the structural properties of datasets where our approach outperforms the competition.
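The outward search described in the abstract can be illustrated with a simplified heuristic. The sketch below is not the paper's higher-order-Voronoi-cell algorithm; it merely upper-bounds the minimum-norm adversarial perturbation for a Euclidean $k$-NN classifier by searching outwards from the input toward differently-labeled training points and bisecting along each direction until the predicted class flips. Function names (knn_predict, knn_adversarial_upper_bound) are hypothetical.

# Minimal illustrative sketch (assumption: Euclidean k-NN with majority vote);
# it returns an adversarial candidate and an upper bound on the minimum
# perturbation norm, not the exact Voronoi-cell-based solution from the paper.
import numpy as np

def knn_predict(X_train, y_train, x, k):
    """Plain k-NN prediction under the L2 norm (majority vote)."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

def knn_adversarial_upper_bound(X_train, y_train, x, k, tol=1e-4):
    """Heuristic upper bound on the minimum-norm adversarial perturbation.

    For each training point of a different class, bisect along the segment
    from x to that point to locate the closest decision-boundary crossing,
    and keep the smallest-norm adversarial candidate found.
    """
    y0 = knn_predict(X_train, y_train, x, k)
    best, best_norm = None, np.inf
    for t in X_train[y_train != y0]:
        # Skip directions whose endpoint is not adversarial.
        if knn_predict(X_train, y_train, t, k) == y0:
            continue
        lo, hi = 0.0, 1.0  # candidate point: x + alpha * (t - x)
        while hi - lo > tol:
            mid = (lo + hi) / 2
            x_mid = x + mid * (t - x)
            if knn_predict(X_train, y_train, x_mid, k) == y0:
                lo = mid   # still the original class: expand the radius
            else:
                hi = mid   # already adversarial: shrink the radius
        x_adv = x + hi * (t - x)
        norm = np.linalg.norm(x_adv - x)
        if norm < best_norm:
            best, best_norm = x_adv, norm
    return best, best_norm

Because it only bisects along straight lines to existing training points, this heuristic gives an upper bound on the true minimum-norm perturbation; the paper's geometric search over nearby Voronoi cells is what allows certified or tighter answers.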

Cite

Text

Sitawarin et al. "Adversarial Examples for K-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams." Neural Information Processing Systems, 2021.

Markdown

[Sitawarin et al. "Adversarial Examples for K-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/sitawarin2021neurips-adversarial/)

BibTeX

@inproceedings{sitawarin2021neurips-adversarial,
  title     = {{Adversarial Examples for K-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams}},
  author    = {Sitawarin, Chawin and Kornaropoulos, Evgenios and Song, Dawn and Wagner, David},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/sitawarin2021neurips-adversarial/}
}