A General Framework for the Practical Disintegration of PAC-Bayesian Bounds

Abstract

PAC-Bayesian bounds are known to be tight and informative when studying the generalization ability of randomized classifiers. However, they require a loose and costly derandomization step when applied to some families of deterministic models such as neural networks. As an alternative to this step, we introduce new PAC-Bayesian generalization bounds whose originality is that they are disintegrated, i.e., they give guarantees on a single hypothesis instead of the usual averaged analysis. Our bounds are easy to optimize and can be used to design learning algorithms. We illustrate this behavior on neural networks, and we show a significant practical improvement over the state-of-the-art framework.
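To give a rough intuition of what "disintegrated" means here, the sketch below contrasts a classical averaged PAC-Bayesian bound with a disintegrated one. The notation is the standard one in this literature (true risk $R_{\mathcal{D}}$, empirical risk $R_S$ on a sample $S$ of size $m$, prior $\pi$, posterior $\rho$, confidence $1-\delta$); these are illustrative statements in the spirit of earlier disintegrated bounds (e.g., Rivasplata et al., 2020) that this line of work builds on, not the paper's exact theorems.

% Classical (averaged) PAC-Bayes: with probability at least 1 - \delta
% over the draw of S ~ D^m, the bound holds for the \rho-average risk:
\mathbb{E}_{h \sim \rho} R_{\mathcal{D}}(h)
  \le \mathbb{E}_{h \sim \rho} R_{S}(h)
  + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln \frac{2\sqrt{m}}{\delta}}{2m}}

% Disintegrated sketch: with probability at least 1 - \delta over the
% draw of S ~ D^m and of a SINGLE hypothesis h ~ \rho, the KL divergence
% is replaced by a pointwise divergence term evaluated at the sampled h:
R_{\mathcal{D}}(h)
  \le R_{S}(h)
  + \sqrt{\frac{\ln \frac{d\rho}{d\pi}(h) + \ln \frac{2\sqrt{m}}{\delta}}{2m}}

Because the second guarantee holds directly for the sampled hypothesis $h$, no derandomization step is needed to obtain a bound for a single deterministic model such as a trained network.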

Cite

Text

Viallard et al. "A General Framework for the Practical Disintegration of PAC-Bayesian Bounds." Machine Learning, 2024. doi:10.1007/s10994-023-06391-0

Markdown

[Viallard et al. "A General Framework for the Practical Disintegration of PAC-Bayesian Bounds." Machine Learning, 2024.](https://mlanthology.org/mlj/2024/viallard2024mlj-general/) doi:10.1007/s10994-023-06391-0

BibTeX

@article{viallard2024mlj-general,
  title     = {{A General Framework for the Practical Disintegration of PAC-Bayesian Bounds}},
  author    = {Viallard, Paul and Germain, Pascal and Habrard, Amaury and Morvant, Emilie},
  journal   = {Machine Learning},
  year      = {2024},
  volume    = {113},
  pages     = {519--604},
  doi       = {10.1007/s10994-023-06391-0},
  url       = {https://mlanthology.org/mlj/2024/viallard2024mlj-general/}
}