Chained Generalisation Bounds

Abstract

This work discusses how to derive upper bounds for the expected generalisation error of supervised learning algorithms by means of the chaining technique. By developing a general theoretical framework, we establish a duality between generalisation bounds based on the regularity of the loss function, and their chained counterparts, which can be obtained by lifting the regularity assumption from the loss onto its gradient. This allows us to re-derive the chaining mutual information bound from the literature, and to obtain novel chained information-theoretic generalisation bounds, based on the Wasserstein distance and other probability metrics. We show on some toy examples that the chained generalisation bound can be significantly tighter than its standard counterpart, particularly when the distribution of the hypotheses selected by the algorithm is very concentrated.
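For readers unfamiliar with the bounds the abstract refers to, here is a minimal LaTeX sketch of the standard mutual information bound (Xu and Raginsky, 2017) alongside a schematic chained counterpart in the spirit of Asadi et al. (2018). The constant c and the quantisation hierarchy (W_k) are illustrative simplifications for exposition, not the statement proved in this paper.

% Standard bound: W is the learned hypothesis, S = (Z_1, ..., Z_n) the
% i.i.d. training sample, gen(W, S) the population risk of W minus its
% empirical risk on S, and the loss \ell(w, Z) is assumed
% \sigma-sub-Gaussian for each fixed hypothesis w (Xu & Raginsky, 2017).
\[
  \bigl|\mathbb{E}\,\mathrm{gen}(W, S)\bigr|
  \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(W; S)} .
\]
% Schematic chained counterpart: W_k denotes a quantisation of W at
% scale 2^{-k}. Chaining trades the single coarse term I(W; S) for a
% multiscale sum (illustrative constant c, after Asadi et al., 2018):
\[
  \bigl|\mathbb{E}\,\mathrm{gen}(W, S)\bigr|
  \;\le\; c \sum_{k \ge 1} 2^{-k} \sqrt{I(W_k; S)} .
\]

This sketch also illustrates the abstract's closing observation: for a nearly deterministic algorithm with a concentrated hypothesis distribution, I(W; S) can be very large or infinite, while each quantised term I(W_k; S) is bounded by the entropy of the finite quantisation, so the chained sum can remain small.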

Cite

Text

Clerico et al. "Chained Generalisation Bounds." Conference on Learning Theory, 2022.

Markdown

[Clerico et al. "Chained Generalisation Bounds." Conference on Learning Theory, 2022.](https://mlanthology.org/colt/2022/clerico2022colt-chained/)

BibTeX

@inproceedings{clerico2022colt-chained,
  title     = {{Chained Generalisation Bounds}},
  author    = {Clerico, Eugenio and Shidani, Amitis and Deligiannidis, George and Doucet, Arnaud},
  booktitle = {Conference on Learning Theory},
  year      = {2022},
  pages     = {4212--4257},
  volume    = {178},
  url       = {https://mlanthology.org/colt/2022/clerico2022colt-chained/}
}