Ensembles of Nearest Neighbor Forecasts
Abstract
Nearest neighbor forecasting models are attractive for their simplicity and their ability to predict complex nonlinear behavior. They rely on the assumption that observations similar to the target one are also likely to have similar outcomes. A common practice in nearest neighbor model selection is to compute the globally optimal number of neighbors on a validation set and then apply it to all incoming queries. For certain queries, however, this number may be suboptimal, producing forecasts that deviate far from the true realization. To address this problem, we propose an alternative approach: training ensembles of nearest neighbor predictors that determine the best number of neighbors for each individual query. We demonstrate that the ensembles' forecasts improve significantly on those of the globally optimal single predictors.
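The basic idea behind the abstract can be illustrated with a minimal sketch: forecast the next value after a query window by averaging the successors of its nearest windows in the history, and combine several such predictors with different neighborhood sizes into an ensemble. This is an assumed, simplified illustration (averaging over a fixed set of k values), not the authors' actual per-query selection scheme; all function and variable names here are hypothetical.

```python
import numpy as np

def knn_forecast(history, query, k, m):
    """Forecast the value following `query` (a window of length m) by
    averaging the successors of its k nearest windows in `history`.
    Hypothetical sketch, not the paper's exact method."""
    n = len(history) - m  # number of windows that have a successor
    windows = np.array([history[i:i + m] for i in range(n)])
    successors = history[m:m + n]  # successor of window i is history[i + m]
    # Euclidean distance from the query to every historical window.
    dists = np.linalg.norm(windows - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return successors[nearest].mean()

def ensemble_forecast(history, query, ks, m):
    """Combine several k-NN predictors (one per value of k) by averaging
    their forecasts -- a crude stand-in for per-query model selection."""
    return np.mean([knn_forecast(history, query, k, m) for k in ks])
```

On a smooth series such as a sampled sine wave, both the single predictor and the ensemble recover the next value closely; the ensemble's value lies in hedging against a single, globally fixed k being wrong for a particular query.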
Cite
Text
Yankov et al. "Ensembles of Nearest Neighbor Forecasts." European Conference on Machine Learning, 2006. doi:10.1007/11871842_51

Markdown

[Yankov et al. "Ensembles of Nearest Neighbor Forecasts." European Conference on Machine Learning, 2006.](https://mlanthology.org/ecmlpkdd/2006/yankov2006ecml-ensembles/) doi:10.1007/11871842_51

BibTeX
@inproceedings{yankov2006ecml-ensembles,
title = {{Ensembles of Nearest Neighbor Forecasts}},
author = {Yankov, Dragomir and DeCoste, Dennis and Keogh, Eamonn J.},
booktitle = {European Conference on Machine Learning},
year = {2006},
pages = {545--556},
doi = {10.1007/11871842_51},
url = {https://mlanthology.org/ecmlpkdd/2006/yankov2006ecml-ensembles/}
}