Dimension-Free Information Concentration via Exp-Concavity
Abstract
Information concentration of probability measures has important implications in learning theory. Recently, it was discovered that the information content of a log-concave distribution concentrates around its differential entropy, albeit with an unpleasant dependence on the ambient dimension. In this work, we prove that if the potential of the log-concave distribution is \emph{exp-concave}, which is a central notion for fast rates in online and statistical learning, then the concentration of information can be further improved to depend only on the exp-concavity parameter, and hence can be dimension-independent. Central to our proof is a novel yet simple application of the variance Brascamp-Lieb inequality. In the context of learning theory, concentration of information immediately yields high-probability counterparts of many previous bounds that hold only in expectation.
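To make the abstract's claim concrete, the following LaTeX sketch spells out the standard definitions and the one-line variance bound they enable. The symbols here ($V$ for the potential, $\beta$ for the exp-concavity parameter, $\mu$ for the measure) are our own illustrative notation and not necessarily the paper's; this is a sketch of the style of argument, not the paper's proof.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Setup (illustrative notation): let $d\mu = e^{-V}\,dx$ be a log-concave
% probability measure on $\mathbb{R}^n$, so the potential $V$ is convex.
% The information content of $X \sim \mu$ is $V(X) = -\log\frac{d\mu}{dx}(X)$,
% whose mean $\mathbb{E}[V(X)]$ is the differential entropy of $\mu$.
Say $V$ is $\beta$-exp-concave, i.e., $e^{-\beta V}$ is concave; differentiating
twice shows this is equivalent to the matrix inequality
\[
  \nabla^2 V \succeq \beta \, \nabla V \, \nabla V^{\top}.
\]
The variance Brascamp--Lieb inequality for strictly log-concave $\mu$ states
that for smooth $g$,
\[
  \operatorname{Var}_{\mu}(g) \le \mathbb{E}_{\mu}\!\left[ \nabla g^{\top} \, (\nabla^2 V)^{-1} \, \nabla g \right].
\]
Taking $g = V$ and setting $t = \nabla V^{\top} (\nabla^2 V)^{-1} \nabla V$,
testing the matrix inequality against $x = (\nabla^2 V)^{-1} \nabla V$ yields
$t \ge \beta t^2$, hence $t \le 1/\beta$ pointwise, and therefore
\[
  \operatorname{Var}\bigl( V(X) \bigr) \le \frac{1}{\beta},
\]
a bound on the fluctuation of the information content around the differential
entropy that is independent of the ambient dimension $n$.
\end{document}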
Cite
Text
Hsieh and Cevher. "Dimension-Free Information Concentration via Exp-Concavity." Proceedings of Algorithmic Learning Theory, 2018.
Markdown
[Hsieh and Cevher. "Dimension-Free Information Concentration via Exp-Concavity." Proceedings of Algorithmic Learning Theory, 2018.](https://mlanthology.org/alt/2018/hsieh2018alt-dimensionfree/)
BibTeX
@inproceedings{hsieh2018alt-dimensionfree,
title = {{Dimension-Free Information Concentration via Exp-Concavity}},
author = {Hsieh, Ya-ping and Cevher, Volkan},
booktitle = {Proceedings of Algorithmic Learning Theory},
year = {2018},
pages = {451--469},
volume = {83},
url = {https://mlanthology.org/alt/2018/hsieh2018alt-dimensionfree/}
}