Speculate-Correct Error Bounds for K-Nearest Neighbor Classifiers

Abstract

We introduce the speculate-correct method to derive error bounds for local classifiers. Using it, we show that k-nearest neighbor classifiers, in spite of their famously fractured decision boundaries, have exponential error bounds with $O\left(\sqrt{(k + \ln n)/n}\right)$ range around an estimate of generalization error for n in-sample examples.
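As an illustrative sketch (not from the paper, which derives the exact constants), the scaling of the bound range $\sqrt{(k + \ln n)/n}$ can be evaluated directly; the function name `bound_range` is hypothetical:

```python
import math

def bound_range(n, k):
    """Scaling of the error-bound range around the generalization-error
    estimate for n in-sample examples and a k-nearest neighbor
    classifier; constants from the paper's bound are omitted."""
    return math.sqrt((k + math.log(n)) / n)

# The range shrinks as the sample size n grows, for fixed k.
print(bound_range(100, 5))    # wider range at small n
print(bound_range(10000, 5))  # narrower range at large n
```

This makes the trade-off in the abstract concrete: larger k widens the range, while larger n narrows it.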

Cite

Text

Bax et al. "Speculate-Correct Error Bounds for K-Nearest Neighbor Classifiers." Machine Learning, 2019. doi:10.1007/s10994-019-05814-1

Markdown

[Bax et al. "Speculate-Correct Error Bounds for K-Nearest Neighbor Classifiers." Machine Learning, 2019.](https://mlanthology.org/mlj/2019/bax2019mlj-speculatecorrect/) doi:10.1007/s10994-019-05814-1

BibTeX

@article{bax2019mlj-speculatecorrect,
  title     = {{Speculate-Correct Error Bounds for K-Nearest Neighbor Classifiers}},
  author    = {Bax, Eric and Weng, Lingjie and Tian, Xu},
  journal   = {Machine Learning},
  year      = {2019},
  pages     = {2087--2111},
  doi       = {10.1007/s10994-019-05814-1},
  volume    = {108},
  url       = {https://mlanthology.org/mlj/2019/bax2019mlj-speculatecorrect/}
}