Variance Estimation in Compound Decision Theory Under Boundedness

Abstract

The normal means model is often studied under the assumption of a known variance. In applications, however, the variance is frequently unknown, and basic theoretical questions remain open in this setting. This article establishes that the sharp minimax rate of variance estimation in squared error is $(\frac{\log\log n}{\log n})^2$ under arguably the mildest assumption imposed for identifiability: bounded means. The estimator proposed in this article achieves the optimal rate by estimating $O\left(\frac{\log n}{\log\log n}\right)$ cumulants and leveraging a variational representation of the noise variance in terms of the cumulants of the data distribution. The minimax lower bound is established via a moment-matching construction.
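The intuition behind using cumulants can be sketched numerically (this is an illustration of the underlying cumulant structure, not the paper's actual estimator or its variational representation): cumulants are additive across independent summands, and a Gaussian has zero cumulants of every order above two. So in the model $Y_i = \theta_i + \sigma\varepsilon_i$ with $\varepsilon_i \sim N(0,1)$, the higher-order cumulants of the data reflect the means' distribution alone, while the second cumulant mixes $\mathrm{Var}(\theta)$ with $\sigma^2$.

```python
# Illustrative sketch (not the paper's estimator): cumulant additivity in the
# normal means model Y_i = theta_i + sigma * eps_i with bounded means.
import numpy as np
from scipy.stats import kstat  # unbiased k-statistic estimators of cumulants

rng = np.random.default_rng(0)
n = 200_000
sigma = 1.5
theta = rng.uniform(-1.0, 1.0, size=n)      # bounded means in [-1, 1]
y = theta + sigma * rng.standard_normal(n)  # observed data

k2 = kstat(y, 2)  # ~ Var(theta) + sigma^2 = 1/3 + 2.25 (noise enters here)
k3 = kstat(y, 3)  # ~ 0 (both components are symmetric about 0)
k4 = kstat(y, 4)  # ~ kappa_4(theta) = -2/15 for Uniform(-1, 1); the Gaussian
                  #   noise contributes nothing to cumulants of order >= 3

print(k2, k3, k4)
```

The empirical second cumulant lands near $\mathrm{Var}(\theta) + \sigma^2$, while the third and fourth track the means' distribution only, which is the separation the cumulant-based approach exploits.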

Cite

Text

Kotekal. "Variance Estimation in Compound Decision Theory Under Boundedness." Neural Information Processing Systems, 2024. doi:10.52202/079017-2919

Markdown

[Kotekal. "Variance Estimation in Compound Decision Theory Under Boundedness." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/kotekal2024neurips-variance/) doi:10.52202/079017-2919

BibTeX

@inproceedings{kotekal2024neurips-variance,
  title     = {{Variance Estimation in Compound Decision Theory Under Boundedness}},
  author    = {Kotekal, Subhodh},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2919},
  url       = {https://mlanthology.org/neurips/2024/kotekal2024neurips-variance/}
}