Unifying Divergence Minimization and Statistical Inference via Convex Duality
Abstract
In this paper we unify divergence minimization and statistical inference by means of convex duality. In the process of doing so, we prove that, as a special case, the dual of approximate maximum entropy estimation is maximum a posteriori estimation. Moreover, our treatment leads to stability and convergence bounds for many statistical learning problems. Finally, we show how an algorithm by Zhang can be used to solve this class of optimization problems efficiently.
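For orientation, the duality the abstract refers to can be sketched under a standard relaxed moment-matching formulation; the symbols $p_0$, $\phi$, $b$, $\varepsilon$, and $\theta$ below are our own shorthand, not notation fixed by the abstract, and the paper's more general Banach-space treatment is omitted. Approximate maximum entropy estimation

\[
\min_{p}\; \mathrm{KL}(p \,\|\, p_0)
\quad \text{subject to} \quad
\bigl\| \mathbb{E}_{p}[\phi(x)] - b \bigr\| \le \varepsilon
\]

has the Fenchel dual

\[
\max_{\theta}\; \langle \theta, b \rangle
\;-\; \log \int p_0(x)\, e^{\langle \theta, \phi(x) \rangle}\, dx
\;-\; \varepsilon \,\| \theta \|_{*},
\]

where $\|\cdot\|_{*}$ denotes the dual norm. When $b$ is the empirical mean of $\phi$, the first two terms form an exponential-family log-likelihood and the last is the log of a prior proportional to $e^{-\varepsilon \|\theta\|_{*}}$, so maximizing the dual amounts to maximum a posteriori estimation.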
Cite
Text
Altun and Smola. "Unifying Divergence Minimization and Statistical Inference via Convex Duality." Annual Conference on Computational Learning Theory, 2006. doi:10.1007/11776420_13
Markdown
[Altun and Smola. "Unifying Divergence Minimization and Statistical Inference via Convex Duality." Annual Conference on Computational Learning Theory, 2006.](https://mlanthology.org/colt/2006/altun2006colt-unifying/) doi:10.1007/11776420_13
BibTeX
@inproceedings{altun2006colt-unifying,
title = {{Unifying Divergence Minimization and Statistical Inference via Convex Duality}},
author = {Altun, Yasemin and Smola, Alexander J.},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2006},
pages = {139--153},
doi = {10.1007/11776420_13},
url = {https://mlanthology.org/colt/2006/altun2006colt-unifying/}
}