Likelihood Score Under Generalized Self-Concordance

Abstract

Under a generalized self-concordance assumption and possible model misspecification, we establish non-asymptotic bounds on the normalized likelihood score for maximum likelihood and score matching estimators. The tail behavior is governed by an effective dimension given by the trace of the sandwich covariance. Our non-asymptotic approach also yields confidence bounds for the estimator and an analysis of Rao's score test.
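To make the "trace of the sandwich covariance" concrete, here is a minimal illustrative sketch, not the paper's construction: for a logistic model fit by maximum likelihood on synthetic data, it forms the score covariance G and the Hessian H at the estimate, builds the sandwich H⁻¹GH⁻¹, and takes its trace as an effective-dimension proxy. The model, data, and exact normalization are assumptions for illustration; the paper's definitions may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 3

# Synthetic covariates and labels from a logistic model (illustrative setup;
# under misspecification the same formulas apply with G != H in general).
X = rng.normal(size=(n, d))
theta_true = np.array([1.0, -0.5, 0.25])
p = 1.0 / (1.0 + np.exp(-X @ theta_true))
y = rng.binomial(1, p)

def fit_logistic(X, y, iters=50):
    """Newton's method for the logistic maximum likelihood estimator."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-X @ theta))
        grad = X.T @ (mu - y) / len(y)                  # gradient of neg. log-lik.
        H = (X * (mu * (1.0 - mu))[:, None]).T @ X / len(y)
        theta -= np.linalg.solve(H, grad)
    return theta

theta_hat = fit_logistic(X, y)

# Score covariance G (average outer product of per-sample scores) and
# Hessian H, both evaluated at the estimate: the two blocks of the sandwich.
mu = 1.0 / (1.0 + np.exp(-X @ theta_hat))
scores = X * (y - mu)[:, None]
G = scores.T @ scores / n
H = (X * (mu * (1.0 - mu))[:, None]).T @ X / n

H_inv = np.linalg.inv(H)
sandwich = H_inv @ G @ H_inv        # sandwich covariance H^{-1} G H^{-1}
d_eff = np.trace(sandwich)          # effective-dimension proxy (illustrative)

print(d_eff)
```

When the model is well specified, G ≈ H (the information identity), so the sandwich reduces to H⁻¹ and the trace collapses accordingly; under misspecification the sandwich, and hence this effective dimension, retains the mismatch between G and H.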

Cite

Text

Liu and Harchaoui. "Likelihood Score Under Generalized Self-Concordance." NeurIPS 2022 Workshops: SBM, 2022.

Markdown

[Liu and Harchaoui. "Likelihood Score Under Generalized Self-Concordance." NeurIPS 2022 Workshops: SBM, 2022.](https://mlanthology.org/neuripsw/2022/liu2022neuripsw-likelihood/)

BibTeX

@inproceedings{liu2022neuripsw-likelihood,
  title     = {{Likelihood Score Under Generalized Self-Concordance}},
  author    = {Liu, Lang and Harchaoui, Zaid},
  booktitle = {NeurIPS 2022 Workshops: SBM},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/liu2022neuripsw-likelihood/}
}