Estimating Entropy of Distributions in Constant Space
Abstract
We consider the task of estimating the entropy of a $k$-ary distribution $p$ from samples in the streaming model, where space is limited. Our main contribution is an algorithm that requires $O\left(\frac{k \log^2 (1/\varepsilon)}{\varepsilon^3}\right)$ samples, uses only $O(1)$ words of memory, and outputs an additive $\pm\varepsilon$ estimate of $H(p)$. Without space limitations, the sample complexity has been established as $S(k,\varepsilon)=\Theta\left(\frac{k}{\varepsilon\log k}+\frac{\log^2 k}{\varepsilon^2}\right)$, which is sub-linear in the domain size $k$; however, the known algorithms that achieve this optimal sample complexity require space nearly linear in $k$. Our algorithm partitions $[0,1]$ into intervals and estimates the entropy contribution of the probability values in each interval, with the intervals designed to trade off bias and variance. Distribution property estimation and testing with limited memory is a largely unexplored research area, and we hope our work will motivate further research in this field.
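For intuition, below is a minimal Python sketch of the underlying idea, not the paper's actual algorithm: since $H(p) = \mathbb{E}_{x \sim p}[\log(1/p_x)]$, one can draw a symbol from the stream, estimate its probability by counting its occurrences in a fresh batch of samples while storing only a few words, and average $\log(1/\hat p_x)$ over independent trials. The fixed batch length here stands in for the paper's interval-dependent choices that trade bias against variance; all function names and parameters are illustrative assumptions.

import math
import random

def stream_entropy_estimate(sample_stream, num_trials=2000, batch_size=500):
    # Constant-space sketch: H(p) = E_{x~p}[log(1/p_x)], so repeatedly
    # draw a symbol x from the stream, estimate p_x by counting matches
    # in a fresh batch of samples, and average log(1/p_hat) over trials.
    # Only a few words (x, count, running total) are stored at any time.
    # (Simplified illustration; Acharya et al. 2019 pick batch lengths
    # per probability interval to control bias and variance.)
    total = 0.0
    for _ in range(num_trials):
        x = next(sample_stream)            # symbol whose probability we probe
        count = 1                          # include x itself so p_hat > 0
        for _ in range(batch_size):
            if next(sample_stream) == x:   # one running match counter
                count += 1
        p_hat = count / (batch_size + 1)   # plug-in estimate of p_x (biased)
        total += math.log(1.0 / p_hat)     # estimated entropy contribution
    return total / num_trials

# Usage: uniform distribution on {0, ..., k-1}; true entropy is log(k) nats.
k = 100
stream = iter(lambda: random.randrange(k), None)
print(stream_entropy_estimate(stream))     # roughly log(100) ~ 4.6, up to bias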
Cite
Text

Acharya et al. "Estimating Entropy of Distributions in Constant Space." Neural Information Processing Systems, 2019.

Markdown

[Acharya et al. "Estimating Entropy of Distributions in Constant Space." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/acharya2019neurips-estimating/)

BibTeX
@inproceedings{acharya2019neurips-estimating,
title = {{Estimating Entropy of Distributions in Constant Space}},
author = {Acharya, Jayadev and Bhadane, Sourbh and Indyk, Piotr and Sun, Ziteng},
booktitle = {Neural Information Processing Systems},
year = {2019},
pages = {5162--5173},
url = {https://mlanthology.org/neurips/2019/acharya2019neurips-estimating/}
}