Finite-Sample Maximum Likelihood Estimation of Location

Abstract

We consider one-dimensional location estimation, where we estimate a parameter $\lambda$ from $n$ samples $\lambda + \eta_i$, with each $\eta_i$ drawn i.i.d. from a known distribution $f$. For fixed $f$ the maximum-likelihood estimate (MLE) is well known to be optimal in the limit as $n \to \infty$: it is asymptotically normal with variance matching the Cramér-Rao lower bound of $\frac{1}{n\mathcal{I}}$, where $\mathcal{I}$ is the Fisher information of $f$. However, this bound does not hold for finite $n$, or when $f$ varies with $n$. We show for arbitrary $f$ and $n$ that one can recover a similar theory based on the Fisher information of a smoothed version of $f$, where the smoothing radius decays with $n$.
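To make the setup concrete, here is a minimal sketch (not from the paper) of the estimation problem the abstract describes: draw $n$ samples $\lambda + \eta_i$ with $\eta_i$ from a known noise distribution $f$, then compute the MLE by maximizing the log-likelihood over $\lambda$. The choice of a Laplace noise distribution, the scale $b$, and the sample size $n$ are all illustrative assumptions; for Laplace noise the Fisher information is $1/b^2$, so the Cramér-Rao bound on the variance is $b^2/n$.

```python
# Illustrative sketch of the location-estimation setup (assumptions:
# Laplace(0, b) noise, scale b, and sample size n are chosen here for
# demonstration; they are not specified in the paper's abstract).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import laplace

rng = np.random.default_rng(0)
lam_true, b, n = 3.0, 1.0, 200

# Samples x_i = lambda + eta_i, with eta_i ~ Laplace(0, b) i.i.d.
samples = lam_true + laplace.rvs(scale=b, size=n, random_state=rng)

# MLE: maximize sum_i log f(x_i - lambda) over lambda,
# i.e. minimize the negative log-likelihood.
def neg_log_likelihood(lam):
    return -laplace.logpdf(samples - lam, scale=b).sum()

res = minimize_scalar(neg_log_likelihood,
                      bounds=(samples.min(), samples.max()),
                      method="bounded")

fisher_info = 1.0 / b**2  # Fisher information of Laplace(0, b)
print(f"MLE estimate:     {res.x:.4f}")
print(f"True location:    {lam_true:.4f}")
print(f"Cramér-Rao bound: {1.0 / (n * fisher_info):.4f} (asymptotic variance)")
```

For Laplace noise the MLE coincides with the sample median, so the optimizer above should land near it; the paper's point is that the asymptotic variance $\frac{1}{n\mathcal{I}}$ printed on the last line is only attained as $n \to \infty$, motivating the finite-$n$ theory via smoothed Fisher information.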

Cite

Text

Gupta et al. "Finite-Sample Maximum Likelihood Estimation of Location." Neural Information Processing Systems, 2022.

Markdown

[Gupta et al. "Finite-Sample Maximum Likelihood Estimation of Location." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/gupta2022neurips-finitesample/)

BibTeX

@inproceedings{gupta2022neurips-finitesample,
  title     = {{Finite-Sample Maximum Likelihood Estimation of Location}},
  author    = {Gupta, Shivam and Lee, Jasper and Price, Eric and Valiant, Paul},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/gupta2022neurips-finitesample/}
}