Achieving the Time of 1-NN, but the Accuracy of k-NN
Abstract
We propose a simple approach which, given distributed computing resources, can nearly achieve the accuracy of $k$-NN prediction while matching (or improving on) the faster prediction time of $1$-NN. The approach consists of aggregating denoised $1$-NN predictors over a small number of distributed subsamples. We show, both theoretically and experimentally, that small subsample sizes suffice to attain performance similar to that of $k$-NN, without sacrificing the computational efficiency of $1$-NN.
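The abstract describes the pipeline only at a high level. Below is a minimal sketch of one plausible reading of it in Python, assuming scikit-learn's KNeighborsClassifier. The specific denoising rule (relabeling each training point with a $k$-NN estimate computed on the full sample), the defaults for `k`, `n_subsamples`, and `subsample_size`, and both function names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fit_denoised_1nn_ensemble(X, y, k=10, n_subsamples=5,
                              subsample_size=None, seed=0):
    """Fit one 1-NN predictor per random subsample of denoised labels.

    Hypothetical helper: the denoising rule (relabel each training point
    with its k-NN prediction on the full sample) is one plausible reading
    of the abstract, not necessarily the paper's exact procedure.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    if subsample_size is None:
        subsample_size = max(1, n // n_subsamples)
    # Denoising step: replace every label with its k-NN estimate on the full data.
    denoiser = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    y_denoised = denoiser.predict(X)
    # One cheap 1-NN predictor per subsample; the fits (and later the queries)
    # are independent, so each can run on a separate machine.
    models = []
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=subsample_size, replace=False)
        models.append(KNeighborsClassifier(n_neighbors=1)
                      .fit(X[idx], y_denoised[idx]))
    return models

def predict_ensemble(models, X_test):
    """Aggregate the distributed 1-NN predictions by majority vote
    (assumes nonnegative integer class labels)."""
    votes = np.stack([m.predict(X_test) for m in models])  # (n_subsamples, n_test)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
```

In this sketch each query touches only one small subsample per machine, so per-machine prediction cost is that of a 1-NN search over `subsample_size` points, while the vote over denoised predictors is what is meant to recover $k$-NN-like accuracy.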
Cite
Text
Xue and Kpotufe. "Achieving the Time of 1-NN, but the Accuracy of k-NN." International Conference on Artificial Intelligence and Statistics, 2018.
Markdown
[Xue and Kpotufe. "Achieving the Time of 1-NN, but the Accuracy of k-NN." International Conference on Artificial Intelligence and Statistics, 2018.](https://mlanthology.org/aistats/2018/xue2018aistats-achieving/)
BibTeX
@inproceedings{xue2018aistats-achieving,
title = {{Achieving the Time of 1-NN, but the Accuracy of k-NN}},
author = {Xue, Lirong and Kpotufe, Samory},
booktitle = {International Conference on Artificial Intelligence and Statistics},
year = {2018},
pages = {1628--1636},
url = {https://mlanthology.org/aistats/2018/xue2018aistats-achieving/}
}