Estimating Divergence Functionals and the Likelihood Ratio by Penalized Convex Risk Minimization

Abstract

We develop and analyze an algorithm for nonparametric estimation of divergence functionals and the density ratio of two probability distributions. Our method is based on a variational characterization of f-divergences, which turns the estimation into a penalized convex risk minimization problem. We present a derivation of our kernel-based estimation algorithm and an analysis of convergence rates for the estimator. Our simulation results demonstrate the convergence behavior of the method, which compares favorably with existing methods in the literature.
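The variational characterization the abstract refers to lower-bounds an f-divergence by a supremum over functions, D_f(P||Q) >= sup_g E_P[g] - E_Q[f*(g)], where f* is the convex conjugate; maximizing the empirical version of the right-hand side yields both a divergence estimate and a density-ratio estimate. The sketch below is only an illustration of that idea for the KL divergence, using a hand-picked quadratic feature class rather than the paper's penalized kernel (RKHS) estimator; the feature map, sample sizes, and optimizer choice are all assumptions for the example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 20000
x_p = rng.normal(0.0, 1.0, n)  # samples from P = N(0, 1)
x_q = rng.normal(1.0, 1.0, n)  # samples from Q = N(1, 1); true KL(P||Q) = 0.5

def features(x):
    # Quadratic features: for two Gaussians the optimal g is a quadratic,
    # so this small class contains the maximizer (an assumption made for
    # the example; the paper instead optimizes over an RKHS with a penalty).
    return np.stack([np.ones_like(x), x, x**2], axis=1)

def neg_objective(theta):
    g_p = features(x_p) @ theta
    g_q = np.clip(features(x_q) @ theta, -50.0, 50.0)  # guard exp overflow
    # Empirical variational lower bound on KL(P||Q): E_P[g] - E_Q[exp(g - 1)]
    return -(g_p.mean() - np.exp(g_q - 1.0).mean())

res = minimize(neg_objective, np.zeros(3), method="L-BFGS-B")
kl_est = -res.fun                       # divergence estimate, approx. 0.5
theta_hat = res.x

def density_ratio(x):
    # The maximizer also recovers the likelihood ratio: p/q = exp(g* - 1)
    return np.exp(features(x) @ theta_hat - 1.0)
```

The same recipe applies to any f-divergence by swapping in the appropriate conjugate f*; the paper's contribution is the kernel-based, penalized version of this optimization together with its convergence-rate analysis.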

Cite

Text

Nguyen et al. "Estimating Divergence Functionals and the Likelihood Ratio by Penalized Convex Risk Minimization." Neural Information Processing Systems, 2007.

Markdown

[Nguyen et al. "Estimating Divergence Functionals and the Likelihood Ratio by Penalized Convex Risk Minimization." Neural Information Processing Systems, 2007.](https://mlanthology.org/neurips/2007/nguyen2007neurips-estimating/)

BibTeX

@inproceedings{nguyen2007neurips-estimating,
  title     = {{Estimating Divergence Functionals and the Likelihood Ratio by Penalized Convex Risk Minimization}},
  author    = {Nguyen, Xuanlong and Wainwright, Martin J. and Jordan, Michael I.},
  booktitle = {Neural Information Processing Systems},
  year      = {2007},
  pages     = {1089--1096},
  url       = {https://mlanthology.org/neurips/2007/nguyen2007neurips-estimating/}
}