Generalised Pinsker Inequalities

Abstract

We generalise the classical Pinsker inequality which relates variational divergence to Kullback-Leibler divergence in two ways: we consider arbitrary f-divergences in place of KL divergence, and we assume knowledge of a sequence of values of generalised variational divergences. We then develop a best possible inequality for this doubly generalised situation. Specialising our result to the classical case provides a new and tight explicit bound relating KL to variational divergence (solving a problem posed by Vajda some 40 years ago). The solution relies on exploiting a connection between divergences and the Bayes risk of a learning problem via an integral representation.
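For reference, the classical Pinsker inequality being generalised here can be stated as follows, assuming the convention (common in the f-divergence literature, though the paper's own normalisation may differ) that the variational divergence is the unnormalised L1 distance taking values in [0, 2]:

\[
\mathrm{KL}(P \,\|\, Q) \;\ge\; \tfrac{1}{2}\, V(P,Q)^2,
\qquad
V(P,Q) := \int \lvert \mathrm{d}P - \mathrm{d}Q \rvert \in [0,2],
\]

equivalently \(V(P,Q) \le \sqrt{2\,\mathrm{KL}(P \,\|\, Q)}\). Vajda's question, answered in this paper, concerns the best possible lower bound on \(\mathrm{KL}\) as an explicit function of \(V\).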

Cite

Text

Reid and Williamson. "Generalised Pinsker Inequalities." Annual Conference on Computational Learning Theory, 2009.

Markdown

[Reid and Williamson. "Generalised Pinsker Inequalities." Annual Conference on Computational Learning Theory, 2009.](https://mlanthology.org/colt/2009/reid2009colt-generalised/)

BibTeX

@inproceedings{reid2009colt-generalised,
  title     = {{Generalised Pinsker Inequalities}},
  author    = {Reid, Mark D. and Williamson, Robert C.},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2009},
  url       = {https://mlanthology.org/colt/2009/reid2009colt-generalised/}
}