Doubly-Competitive Distribution Estimation

Abstract

Distribution estimation is a statistical-learning cornerstone. Its classical min-max formulation minimizes the estimation error for the worst distribution, and hence under-performs on practical distributions that, like power laws, are often rather simple. Modern research has therefore focused on two frameworks: structural estimation, which improves learning accuracy by assuming a simple structure for the underlying distribution; and competitive, or instance-optimal, estimation, which achieves the performance of a genie-aided estimator up to a small excess error that vanishes as the sample size grows, regardless of the distribution. This paper combines and strengthens the two frameworks. It designs a single estimator whose excess error vanishes both at a universal rate as the sample size grows and as the (unknown) distribution gets simpler. We show that the resulting algorithm significantly improves the performance guarantees of numerous competitive- and structural-estimation results. The algorithm runs in near-linear time and is robust to model misspecification and domain-symbol permutations.
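To make the contrast between the worst-case and the competitive viewpoints concrete, here is a minimal, hypothetical Python sketch (not the paper's estimator): it compares the plain empirical estimator with a simple add-constant smoother on samples from a power-law (Zipf-like) distribution, measuring KL estimation error. All function names, the support size, and the smoothing constant are illustrative assumptions.

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q), using the convention 0 * log(0/q) = 0."""
    mask = p > 0
    if np.any(q[mask] == 0):
        return float("inf")  # empirical estimator assigns 0 to unseen symbols
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def empirical_estimator(counts, n):
    """Maximum-likelihood estimate: normalized symbol counts."""
    return counts / n

def add_constant_estimator(counts, n, c=0.5):
    """Add-constant smoothing (c=0.5 is the Krichevsky-Trofimov choice)."""
    k = len(counts)
    return (counts + c) / (n + c * k)

rng = np.random.default_rng(0)
k = 1000                          # illustrative support size
p = 1.0 / np.arange(1, k + 1)     # power-law shape
p /= p.sum()

for n in [100, 1000, 10000]:
    sample = rng.choice(k, size=n, p=p)
    counts = np.bincount(sample, minlength=k)
    # The empirical estimator's KL error is infinite whenever a symbol with
    # positive probability is unseen; the smoothed estimator stays finite
    # and its error shrinks as the sample size n grows.
    print(n,
          kl_divergence(p, empirical_estimator(counts, n)),
          kl_divergence(p, add_constant_estimator(counts, n)))
```

On power-law data most of the support is unseen at small sample sizes, so the empirical estimator fails under KL loss while even crude smoothing succeeds; the paper's doubly-competitive guarantee goes much further, tracking a genie-aided estimator with an excess error that vanishes with sample size and with distribution simplicity.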

Cite

Text

Hao and Orlitsky. "Doubly-Competitive Distribution Estimation." International Conference on Machine Learning, 2019.

Markdown

[Hao and Orlitsky. "Doubly-Competitive Distribution Estimation." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/hao2019icml-doublycompetitive/)

BibTeX

@inproceedings{hao2019icml-doublycompetitive,
  title     = {{Doubly-Competitive Distribution Estimation}},
  author    = {Hao, Yi and Orlitsky, Alon},
  booktitle = {International Conference on Machine Learning},
  year      = {2019},
  pages     = {2614--2623},
  volume    = {97},
  url       = {https://mlanthology.org/icml/2019/hao2019icml-doublycompetitive/}
}