The Relative Complexity of Maximum Likelihood Estimation, MAP Estimation, and Sampling

Abstract

We prove that, for a broad range of problems, maximum-a-posteriori (MAP) estimation and approximate sampling of the posterior are at least as computationally difficult as maximum-likelihood (ML) estimation. By way of illustration, we show how hardness results for ML estimation of mixtures of Gaussians and topic models carry over to MAP estimation and approximate sampling under commonly used priors.
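For orientation, the three problems compared in the abstract can be stated with their standard textbook definitions (these formulas are generic definitions, not taken from the paper itself). Given observed data $x$, a parametric model family $\{p(x \mid \theta)\}$, and a prior $\pi(\theta)$:

```latex
% Maximum-likelihood (ML) estimation: maximize the likelihood alone.
\hat{\theta}_{\mathrm{ML}} \;=\; \arg\max_{\theta}\; p(x \mid \theta)

% Maximum-a-posteriori (MAP) estimation: maximize likelihood times prior.
\hat{\theta}_{\mathrm{MAP}} \;=\; \arg\max_{\theta}\; p(x \mid \theta)\,\pi(\theta)

% Posterior sampling: draw a parameter from the posterior distribution.
\theta \;\sim\; p(\theta \mid x) \;\propto\; p(x \mid \theta)\,\pi(\theta)
```

The paper's result says that, for a broad class of problems, solving the second and (approximately) the third of these is at least as computationally hard as solving the first.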

Cite

Text

Tosh and Dasgupta. "The Relative Complexity of Maximum Likelihood Estimation, MAP Estimation, and Sampling." Conference on Learning Theory, 2019.

Markdown

[Tosh and Dasgupta. "The Relative Complexity of Maximum Likelihood Estimation, MAP Estimation, and Sampling." Conference on Learning Theory, 2019.](https://mlanthology.org/colt/2019/tosh2019colt-relative/)

BibTeX

@inproceedings{tosh2019colt-relative,
  title     = {{The Relative Complexity of Maximum Likelihood Estimation, MAP Estimation, and Sampling}},
  author    = {Tosh, Christopher and Dasgupta, Sanjoy},
  booktitle = {Conference on Learning Theory},
  year      = {2019},
  pages     = {2993--3035},
  volume    = {99},
  url       = {https://mlanthology.org/colt/2019/tosh2019colt-relative/}
}