On the Mutual Nearest Neighbors Estimate in Regression
Abstract
Motivated by promising experimental results, this paper investigates the theoretical properties of a recently proposed nonparametric estimator, called the Mutual Nearest Neighbors rule, which estimates the regression function $m(\mathbf{x})=\mathbb E[Y|\mathbf{X}=\mathbf{x}]$ as follows: first identify the $k$ nearest neighbors of $\mathbf{x}$ in the sample $\mathcal{D}_n$, then keep only those for which $\mathbf{x}$ is itself one of their $k$ nearest neighbors, and finally take the average over the corresponding response variables. We prove that this estimator is consistent and that its rate of convergence is optimal. Since the estimate with the optimal rate of convergence depends on the unknown distribution of the observations, we also present adaptation results by data-splitting.
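To make the construction concrete, the following is a minimal Python sketch of the Mutual Nearest Neighbors rule as described in the abstract; it is not taken from the paper, and the function name `mutual_knn_regression`, the Euclidean metric, the tie-breaking convention, and the choice to return 0 when no mutual neighbors survive are all illustrative assumptions.

```python
import numpy as np

def mutual_knn_regression(X_train, y_train, x_query, k):
    """Sketch of the Mutual Nearest Neighbors (MNN) regression estimate at one query point.

    Among the k nearest neighbors of x_query in the sample, keep only those sample
    points whose own k nearest neighbors (within the sample with x_query added)
    include x_query, then average their responses.
    """
    # Distances from the query point to every sample point.
    d_query = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k nearest neighbors of x_query in the sample.
    knn_of_query = np.argsort(d_query)[:k]

    mutual = []
    for i in knn_of_query:
        # Distances from X_train[i] to the other sample points.
        d_i = np.linalg.norm(X_train - X_train[i], axis=1)
        d_i[i] = np.inf  # exclude the point itself
        # x_query is among the k nearest neighbors of X_train[i] if fewer than k
        # sample points are strictly closer to X_train[i] than x_query is.
        if np.sum(d_i < d_query[i]) < k:
            mutual.append(i)

    # Average the responses of the mutual nearest neighbors (0 if none survive,
    # an arbitrary convention used here for illustration).
    return y_train[mutual].mean() if mutual else 0.0
```

For example, with `X_train` of shape `(n, d)`, `y_train` of shape `(n,)`, and a query `x_query` of shape `(d,)`, the call `mutual_knn_regression(X_train, y_train, x_query, k=5)` returns the average response over the mutual nearest neighbors, a subset of the usual 5 nearest neighbors.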
Cite
Text
Guyader and Hengartner. "On the Mutual Nearest Neighbors Estimate in Regression." Journal of Machine Learning Research, 2013.
Markdown
[Guyader and Hengartner. "On the Mutual Nearest Neighbors Estimate in Regression." Journal of Machine Learning Research, 2013.](https://mlanthology.org/jmlr/2013/guyader2013jmlr-mutual/)
BibTeX
@article{guyader2013jmlr-mutual,
title = {{On the Mutual Nearest Neighbors Estimate in Regression}},
author = {Guyader, Arnaud and Hengartner, Nick},
journal = {Journal of Machine Learning Research},
year = {2013},
pages = {2361-2376},
volume = {14},
url = {https://mlanthology.org/jmlr/2013/guyader2013jmlr-mutual/}
}