Information-Theoretic Analysis of Generalization Capability of Learning Algorithms
Abstract
We derive upper bounds on the generalization error of a learning algorithm in terms of the mutual information between its input and output. The bounds provide an information-theoretic understanding of generalization in learning problems, and give theoretical guidelines for striking the right balance between data fit and generalization by controlling the input-output mutual information. We propose a number of methods for this purpose, among which are algorithms that regularize the ERM algorithm with relative entropy or with random noise. Our work extends and leads to nontrivial improvements on the recent results of Russo and Zou.
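As an illustration of the kind of guarantee the abstract refers to, a representative bound of this type (the paper's main result, stated here under a sub-Gaussian assumption on the loss) reads, in LaTeX:

\[
\bigl| \mathbb{E}\bigl[ L_\mu(W) - L_S(W) \bigr] \bigr| \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(S;W)},
\]

where S = (Z_1, \dots, Z_n) is the training sample drawn i.i.d. from \mu, W is the output of the (possibly randomized) learning algorithm P_{W|S}, L_\mu and L_S denote the population and empirical risks, and the loss \ell(w, Z) is assumed \sigma-sub-Gaussian under \mu for every w. Fitting the data more aggressively tends to increase I(S;W), while the bound shrinks as I(S;W) is reduced; this is the trade-off that the relative-entropy and noise-based regularization schemes mentioned above are designed to control.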
Cite
Text
Xu and Raginsky. "Information-Theoretic Analysis of Generalization Capability of Learning Algorithms." Neural Information Processing Systems, 2017.

Markdown
[Xu and Raginsky. "Information-Theoretic Analysis of Generalization Capability of Learning Algorithms." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/xu2017neurips-informationtheoretic/)

BibTeX
@inproceedings{xu2017neurips-informationtheoretic,
title = {{Information-Theoretic Analysis of Generalization Capability of Learning Algorithms}},
author = {Xu, Aolin and Raginsky, Maxim},
booktitle = {Neural Information Processing Systems},
year = {2017},
pages = {2524--2533},
url = {https://mlanthology.org/neurips/2017/xu2017neurips-informationtheoretic/}
}