Scalable Hash-Based Estimation of Divergence Measures
Abstract
We propose a scalable divergence estimation method based on hashing. Consider two continuous random variables $X$ and $Y$ whose densities have bounded support. We apply a particular locality-sensitive random hashing and consider, in each hash bin containing a non-zero number of $Y$ samples, the ratio of the numbers of $X$ and $Y$ samples. We prove that the weighted average of these ratios over all of the hash bins converges to an f-divergence between the two sample sets. We show that the proposed estimator is optimal in terms of both MSE rate and computational complexity. We derive the MSE rates for two families of smooth functions: the Hölder smoothness class and differentiable functions. In particular, we prove that if the density functions have bounded derivatives up to order $d/2$, where $d$ is the dimension of the samples, the optimal parametric MSE rate of $O(1/N)$ can be achieved. The computational complexity is shown to be $O(N)$, which is optimal. To the best of our knowledge, this is the first empirical divergence estimator that has optimal computational complexity and achieves the optimal parametric MSE estimation rate.
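The bin-ratio idea in the abstract can be illustrated with a minimal sketch: hash points to an $\epsilon$-grid, count $X$ and $Y$ samples per bin, and average $f$ of the count ratios over bins containing $Y$ samples. This is an assumed plug-in variant for illustration, not the paper's exact estimator (which uses a corrected weighting to reach the optimal MSE rate); all names here are hypothetical.

```python
import math
from collections import defaultdict

def f_divergence_hash(x, y, eps, f):
    """Illustrative plug-in estimate of D_f(P||Q) = E_Q[f(p/q)]:
    weighted average of f(density-ratio) over epsilon-grid hash bins.
    Not the paper's exact estimator; a sketch of the bin-ratio idea."""
    def h(point):
        # Epsilon-grid hash: each point maps to the tuple of its cell indices.
        return tuple(math.floor(c / eps) for c in point)

    cx, cy = defaultdict(int), defaultdict(int)
    for p in x:
        cx[h(p)] += 1
    for p in y:
        cy[h(p)] += 1

    nx, ny = len(x), len(y)
    est = 0.0
    # Only bins with a non-zero number of Y samples contribute.
    for b, my in cy.items():
        ratio = (cx[b] / nx) / (my / ny)  # in-bin density-ratio estimate
        est += (my / ny) * f(ratio)       # weight by the Y-mass of the bin
    return est

# KL divergence corresponds to f(t) = t * log t (with f(0) = 0).
kl = lambda t: t * math.log(t) if t > 0 else 0.0
```

With identical sample sets every bin ratio equals 1, so the KL estimate is exactly 0; the $O(N)$ complexity is visible in the single pass over each sample set plus one pass over the occupied bins.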
Cite
Text
Noshad and Hero. "Scalable Hash-Based Estimation of Divergence Measures." International Conference on Artificial Intelligence and Statistics, 2018. doi:10.1109/ITA.2018.8503092
Markdown
[Noshad and Hero. "Scalable Hash-Based Estimation of Divergence Measures." International Conference on Artificial Intelligence and Statistics, 2018.](https://mlanthology.org/aistats/2018/noshad2018aistats-scalable/) doi:10.1109/ITA.2018.8503092
BibTeX
@inproceedings{noshad2018aistats-scalable,
title = {{Scalable Hash-Based Estimation of Divergence Measures}},
author = {Noshad, Morteza and Hero, III, Alfred O.},
booktitle = {International Conference on Artificial Intelligence and Statistics},
year = {2018},
pages = {1877-1885},
doi = {10.1109/ITA.2018.8503092},
url = {https://mlanthology.org/aistats/2018/noshad2018aistats-scalable/}
}