Finite-Sample Analysis of Fixed-K Nearest Neighbor Density Functional Estimators
Abstract
We provide a finite-sample analysis of a general framework for using k-nearest neighbor statistics to estimate functionals of a nonparametric continuous probability density, including entropies and divergences. Rather than plugging a consistent density estimate (which requires k → ∞ as the sample size n → ∞) into the functional of interest, the estimators we consider fix k and perform a bias correction. This can be more efficient computationally and, as we show, statistically, leading to faster convergence rates. Our framework unifies several previous estimators, for most of which ours are the first finite-sample guarantees.
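The canonical instance of this fixed-k, bias-corrected approach is the Kozachenko-Leonenko estimator of differential entropy, one of the estimators the framework covers, where the bias correction takes the form of digamma terms ψ(n) − ψ(k). As an illustration only (not the authors' code), here is a minimal Python sketch assuming SciPy; the function name `kl_entropy` and the default k = 3 are our own choices:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropy(x, k=3):
    """Kozachenko-Leonenko fixed-k estimator of differential entropy (in nats).

    The digamma correction psi(n) - psi(k) removes the bias of the naive
    plug-in estimate, which is what allows k to stay fixed as n grows.
    """
    n, d = x.shape
    tree = cKDTree(x)
    # Distance from each point to its k-th nearest neighbor; we query k+1
    # neighbors because each point's nearest "neighbor" is itself.
    eps = tree.query(x, k=k + 1)[0][:, k]
    # Log volume of the unit ball in R^d: pi^(d/2) / Gamma(d/2 + 1).
    log_c_d = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
    return digamma(n) - digamma(k) + log_c_d + d * np.mean(np.log(eps))

# Sanity check: a standard 2-D Gaussian has entropy log(2*pi*e) ~ 2.838 nats.
x = np.random.default_rng(0).standard_normal((5000, 2))
print(kl_entropy(x, k=3))  # should be close to 2.838
```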
Cite
Text
Singh and Poczos. "Finite-Sample Analysis of Fixed-K Nearest Neighbor Density Functional Estimators." Neural Information Processing Systems, 2016.
Markdown
[Singh and Poczos. "Finite-Sample Analysis of Fixed-K Nearest Neighbor Density Functional Estimators." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/singh2016neurips-finitesample/)
BibTeX
@inproceedings{singh2016neurips-finitesample,
title = {{Finite-Sample Analysis of Fixed-K Nearest Neighbor Density Functional Estimators}},
author = {Singh, Shashank and Poczos, Barnabas},
booktitle = {Neural Information Processing Systems},
year = {2016},
pages = {1217--1225},
url = {https://mlanthology.org/neurips/2016/singh2016neurips-finitesample/}
}