Localized Upper and Lower Bounds for Some Estimation Problems

Abstract

We derive upper and lower bounds for some statistical estimation problems. The upper bounds are established for the Gibbs algorithm. The lower bounds, applicable to all statistical estimators, match the obtained upper bounds for various problems. Moreover, our framework can be regarded as a natural generalization of the standard minimax framework, in that we allow the performance of the estimator to vary across the possible underlying distributions according to a pre-defined prior.
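The Gibbs algorithm mentioned in the abstract refers to a randomized estimator drawn from a posterior that reweights a prior by an exponentiated empirical loss. A minimal sketch over a discrete parameter grid is below; the data-generating setup, the squared-error loss, and the temperature parameter `lam` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: estimate the mean of 1-D data over a discrete
# grid of candidate parameters (constants here are illustrative).
data = rng.normal(loc=0.5, scale=1.0, size=100)
grid = np.linspace(-2.0, 2.0, 81)            # candidate parameters
prior = np.full(grid.size, 1.0 / grid.size)  # uniform prior on the grid

# Empirical squared-error loss of each candidate parameter.
emp_loss = np.array([np.mean((data - theta) ** 2) for theta in grid])

# Gibbs posterior: weight each candidate by prior * exp(-lam * n * loss),
# then normalize; lam acts as an inverse-temperature / learning rate.
lam, n = 0.5, data.size
logw = np.log(prior) - lam * n * emp_loss
logw -= logw.max()            # subtract max for numerical stability
weights = np.exp(logw)
weights /= weights.sum()

# The Gibbs algorithm outputs a randomized estimate sampled from
# this posterior rather than a deterministic minimizer.
theta_hat = rng.choice(grid, p=weights)
print(theta_hat)
```

With a large sample and moderate `lam`, the posterior concentrates near the empirical loss minimizer, so the sampled estimate lands close to the sample mean; taking `lam` to infinity would recover empirical risk minimization on the grid.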

Cite

Text

Zhang. "Localized Upper and Lower Bounds for Some Estimation Problems." Annual Conference on Computational Learning Theory, 2005. doi:10.1007/11503415_35

Markdown

[Zhang. "Localized Upper and Lower Bounds for Some Estimation Problems." Annual Conference on Computational Learning Theory, 2005.](https://mlanthology.org/colt/2005/zhang2005colt-localized/) doi:10.1007/11503415_35

BibTeX

@inproceedings{zhang2005colt-localized,
  title     = {{Localized Upper and Lower Bounds for Some Estimation Problems}},
  author    = {Zhang, Tong},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2005},
  pages     = {516--530},
  doi       = {10.1007/11503415_35},
  url       = {https://mlanthology.org/colt/2005/zhang2005colt-localized/}
}