A Constrained Risk Inequality for General Losses

Abstract

We provide a general constrained risk inequality that applies to arbitrary non-decreasing losses, extending a result of Brown and Low [\emph{Ann. Statist.}, 1996]. Given two distributions $P_0$ and $P_1$, the inequality lower-bounds the risk of estimating the parameter $\theta(P_1)$ under $P_1$ whenever the risk of estimating the parameter $\theta(P_0)$ under $P_0$ is bounded above. The inequality is a useful pedagogical tool: its proof relies only on the Cauchy-Schwarz inequality, it applies to general losses, and it transparently yields risk lower bounds for super-efficient and adaptive estimators.
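For orientation, the squared-error special case of the Brown and Low (1996) inequality that the paper generalizes can be sketched in a few lines; the quantities $\epsilon$, $\Delta$, and $I$ below are introduced only for this illustration and are not the paper's notation.

Write $\theta_0 = \theta(P_0)$, $\theta_1 = \theta(P_1)$, $\Delta = |\theta_1 - \theta_0|$, and (assuming $P_1 \ll P_0$ so the likelihood ratio exists) let
$$ I^2 = \mathbb{E}_{P_0}\!\left[\Big(\tfrac{dP_1}{dP_0}\Big)^{2}\right]. $$
If an estimator $\hat\theta$ satisfies $\mathbb{E}_{P_0}[(\hat\theta - \theta_0)^2] \le \epsilon^2$ with $\epsilon I \le \Delta$, then a change of measure and the Cauchy-Schwarz inequality give
$$ \big|\mathbb{E}_{P_1}[\hat\theta] - \theta_0\big| = \Big|\mathbb{E}_{P_0}\Big[(\hat\theta - \theta_0)\,\tfrac{dP_1}{dP_0}\Big]\Big| \le \epsilon I, $$
so that by Jensen's inequality
$$ \mathbb{E}_{P_1}[(\hat\theta - \theta_1)^2] \ge \big(\mathbb{E}_{P_1}[\hat\theta] - \theta_1\big)^2 \ge (\Delta - \epsilon I)^2. $$
In words, an estimator that is very accurate under $P_0$ cannot also be accurate under a nearby $P_1$; this is the mechanism behind lower bounds for super-efficient and adaptive estimators.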

Cite

Text

Duchi and Ruan. "A Constrained Risk Inequality for General Losses." Artificial Intelligence and Statistics, 2021.

Markdown

[Duchi and Ruan. "A Constrained Risk Inequality for General Losses." Artificial Intelligence and Statistics, 2021.](https://mlanthology.org/aistats/2021/duchi2021aistats-constrained/)

BibTeX

@inproceedings{duchi2021aistats-constrained,
  title     = {{A Constrained Risk Inequality for General Losses}},
  author    = {Duchi, John and Ruan, Feng},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2021},
  pages     = {802--810},
  volume    = {130},
  url       = {https://mlanthology.org/aistats/2021/duchi2021aistats-constrained/}
}