Time-Accuracy Tradeoffs in Kernel Prediction: Controlling Prediction Quality

Abstract

Kernel regression or classification (also referred to as weighted $\epsilon$-NN methods in Machine Learning) are appealing for their simplicity and therefore ubiquitous in data analysis. However, practical implementations of kernel regression or classification consist of quantizing or sub-sampling data for improving time efficiency, often at the cost of prediction quality. While such tradeoffs are necessary in practice, their statistical implications are generally not well understood, hence practical implementations come with few performance guarantees. In particular, it is unclear whether it is possible to maintain the statistical accuracy of kernel prediction---crucial in some applications---while improving prediction time.
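
For a concrete picture of the setting the abstract describes, the following is a minimal, illustrative sketch (not the authors' algorithm) of a weighted $\epsilon$-NN kernel regressor, together with naive uniform sub-sampling of the training set as one crude way to trade prediction quality for speed. All function names, parameters, and data below are hypothetical.

# Illustrative sketch only: a weighted eps-NN (kernel) regressor plus
# uniform sub-sampling of the training set, as a stand-in for the
# time-accuracy tradeoff discussed in the abstract.
import numpy as np

def kernel_regress(X_train, y_train, x_query, eps=0.5):
    """Nadaraya-Watson estimate with a triangular kernel on the eps-ball."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    w = np.maximum(1.0 - d / eps, 0.0)            # weight is 0 outside the eps-ball
    if w.sum() == 0.0:                            # no neighbors within eps
        return float(y_train.mean())              # fall back to the global mean
    return float(np.dot(w, y_train) / w.sum())

def subsample(X_train, y_train, frac=0.1, seed=None):
    """Keep a uniform random fraction of the data to speed up prediction."""
    rng = np.random.default_rng(seed)
    m = max(1, int(frac * len(X_train)))
    idx = rng.choice(len(X_train), size=m, replace=False)
    return X_train[idx], y_train[idx]

# Usage: predict at a query point with the full data and with a 10% sample.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(5000, 2))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=len(X))
x0 = np.array([0.2, -0.4])
Xs, ys = subsample(X, y, frac=0.1, seed=1)
print(kernel_regress(X, y, x0), kernel_regress(Xs, ys, x0))

In this toy version, shrinking frac reduces per-query work (fewer distance computations) at some cost in accuracy, which is the kind of time-accuracy tradeoff on prediction quality that the abstract refers to.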

Cite

Text

Kpotufe and Verma. "Time-Accuracy Tradeoffs in Kernel Prediction: Controlling Prediction Quality." Journal of Machine Learning Research, 2017.

Markdown

[Kpotufe and Verma. "Time-Accuracy Tradeoffs in Kernel Prediction: Controlling Prediction Quality." Journal of Machine Learning Research, 2017.](https://mlanthology.org/jmlr/2017/kpotufe2017jmlr-timeaccuracy/)

BibTeX

@article{kpotufe2017jmlr-timeaccuracy,
  title     = {{Time-Accuracy Tradeoffs in Kernel Prediction: Controlling Prediction Quality}},
  author    = {Kpotufe, Samory and Verma, Nakul},
  journal   = {Journal of Machine Learning Research},
  year      = {2017},
  pages     = {1--29},
  volume    = {18},
  url       = {https://mlanthology.org/jmlr/2017/kpotufe2017jmlr-timeaccuracy/}
}