K-Local Hyperplane and Convex Distance Nearest Neighbor Algorithms

Abstract

Guided by an initial idea of building a complex (non-linear) decision surface with maximal local margin in input space, we give a possible geometrical intuition as to why K-Nearest Neighbor (KNN) algorithms often perform more poorly than SVMs on classification tasks. We then propose modified K-Nearest Neighbor algorithms to overcome the perceived problem. The approach is similar in spirit to Tangent Distance, but with invariances inferred from the local neighborhood rather than prior knowledge. Experimental results on real world classification tasks suggest that the modified KNN algorithms often give a dramatic improvement over standard KNN and perform as well or better than SVMs.
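The core idea sketched in the abstract is to replace the point-to-point distances of standard KNN with the distance from a query point to the local hyperplane (the affine hull) spanned by its K nearest neighbors of each class. A minimal NumPy sketch of that idea follows; the function names, the least-squares solve, and the per-class neighbor selection are illustrative assumptions, not the authors' code:

```python
import numpy as np

def hyperplane_distance(x, neighbors):
    """Distance from x to the affine hull of the given neighbor points.

    The affine hull equals c + span(p_i - c), where c is the centroid,
    so we solve min_alpha ||x - c - V @ alpha|| by least squares.
    """
    c = neighbors.mean(axis=0)
    V = (neighbors - c).T                      # shape (d, K): centered neighbors
    alpha, *_ = np.linalg.lstsq(V, x - c, rcond=None)
    return np.linalg.norm(x - c - V @ alpha)

def hknn_predict(x, X, y, K=3):
    """Predict the class whose local K-neighbor hyperplane is closest to x."""
    best_class, best_dist = None, np.inf
    for cls in np.unique(y):
        Xc = X[y == cls]
        # K nearest training points of this class, by Euclidean distance
        idx = np.argsort(np.linalg.norm(Xc - x, axis=1))[:K]
        d = hyperplane_distance(x, Xc[idx])
        if d < best_dist:
            best_class, best_dist = cls, d
    return best_class
```

The paper's convex-distance variant additionally constrains the combination coefficients; the unconstrained least-squares version above only illustrates the hyperplane case.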

Cite

Text

Vincent and Bengio. "K-Local Hyperplane and Convex Distance Nearest Neighbor Algorithms." Neural Information Processing Systems, 2001.

Markdown

[Vincent and Bengio. "K-Local Hyperplane and Convex Distance Nearest Neighbor Algorithms." Neural Information Processing Systems, 2001.](https://mlanthology.org/neurips/2001/vincent2001neurips-klocal/)

BibTeX

@inproceedings{vincent2001neurips-klocal,
  title     = {{K-Local Hyperplane and Convex Distance Nearest Neighbor Algorithms}},
  author    = {Vincent, Pascal and Bengio, Yoshua},
  booktitle = {Neural Information Processing Systems},
  year      = {2001},
  pages     = {985--992},
  url       = {https://mlanthology.org/neurips/2001/vincent2001neurips-klocal/}
}