Bregman Divergence as General Framework to Estimate Unnormalized Statistical Models

Abstract

We show that the Bregman divergence provides a rich framework to estimate unnormalized statistical models for continuous or discrete random variables, that is, models which do not integrate or sum to one, respectively. We prove that recent estimation methods such as noise-contrastive estimation, ratio matching, and score matching belong to the proposed framework, and explain their interconnection based on supervised learning. Further, we discuss the role of boosting in unsupervised learning.
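As a quick orientation (not part of the original abstract): the central object of the paper, the Bregman divergence generated by a strictly convex, differentiable function f, is defined as

\[
D_f(x, y) \;=\; f(x) \;-\; f(y) \;-\; \langle \nabla f(y),\, x - y \rangle .
\]

For instance, f(u) = u log u yields the generalized Kullback-Leibler divergence d(x, y) = x log(x/y) - x + y, which is well defined even when x and y are not normalized; this is the sense in which Bregman divergences can compare unnormalized models against data.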

Cite

Text

Gutmann and Hirayama. "Bregman Divergence as General Framework to Estimate Unnormalized Statistical Models." Conference on Uncertainty in Artificial Intelligence, 2011.

Markdown

[Gutmann and Hirayama. "Bregman Divergence as General Framework to Estimate Unnormalized Statistical Models." Conference on Uncertainty in Artificial Intelligence, 2011.](https://mlanthology.org/uai/2011/gutmann2011uai-bregman/)

BibTeX

@inproceedings{gutmann2011uai-bregman,
  title     = {{Bregman Divergence as General Framework to Estimate Unnormalized Statistical Models}},
  author    = {Gutmann, Michael and Hirayama, Junichiro},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2011},
  pages     = {283--290},
  url       = {https://mlanthology.org/uai/2011/gutmann2011uai-bregman/}
}