T-Divergence Based Approximate Inference
Abstract
Approximate inference is an important technique for dealing with large, intractable graphical models based on the exponential family of distributions. We extend the idea of approximate inference to the t-exponential family by defining a new t-divergence. This divergence measure is obtained via convex duality between the log-partition function of the t-exponential family and a new t-entropy. We illustrate our approach on the Bayes Point Machine with a Student's t-prior.
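The t-exponential family referenced in the abstract is built on the Tsallis t-exponential and its inverse, the t-logarithm, both of which reduce to the ordinary exp and log as t approaches 1. A minimal sketch of these two functions (the names `exp_t` and `log_t` are our own; the paper's divergence and entropy constructions are not reproduced here):

```python
import math

def exp_t(x: float, t: float) -> float:
    """Tsallis t-exponential: [1 + (1 - t) x]_+^{1/(1 - t)}.

    Reduces to exp(x) as t -> 1.
    """
    if t == 1.0:
        return math.exp(x)
    base = 1.0 + (1.0 - t) * x
    # The [.]_+ clamp: exp_t is zero where the base is non-positive.
    return 0.0 if base <= 0.0 else base ** (1.0 / (1.0 - t))

def log_t(x: float, t: float) -> float:
    """Tsallis t-logarithm, the inverse of exp_t on its positive range:
    (x^{1 - t} - 1) / (1 - t).
    """
    if t == 1.0:
        return math.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)
```

For instance, with t = 1.5 one can check numerically that `log_t(exp_t(x, 1.5), 1.5)` recovers x, mirroring the log/exp duality that the paper generalizes when pairing the log-partition function with the t-entropy.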
Cite
Text
Ding et al. "T-Divergence Based Approximate Inference." Neural Information Processing Systems, 2011.
Markdown
[Ding et al. "T-Divergence Based Approximate Inference." Neural Information Processing Systems, 2011.](https://mlanthology.org/neurips/2011/ding2011neurips-tdivergence/)
BibTeX
@inproceedings{ding2011neurips-tdivergence,
title = {{T-Divergence Based Approximate Inference}},
author = {Ding, Nan and Qi, Yuan and Vishwanathan, S.V.N.},
booktitle = {Neural Information Processing Systems},
year = {2011},
pages = {1494--1502},
url = {https://mlanthology.org/neurips/2011/ding2011neurips-tdivergence/}
}