Simpler PAC-Bayesian Bounds for Hostile Data

Abstract

PAC-Bayesian learning bounds are of the utmost interest to the learning community. Their role is to connect the generalization ability of an aggregation distribution ρ to its empirical risk and to its Kullback-Leibler divergence with respect to some prior distribution π. Unfortunately, most of the available bounds typically rely on heavy assumptions such as boundedness and independence of the observations. This paper aims at relaxing these constraints and provides PAC-Bayesian learning bounds that hold for dependent, heavy-tailed observations (hereafter referred to as hostile data). In these bounds the Kullback-Leibler divergence is replaced with a general version of Csiszár’s f-divergence. We prove a general PAC-Bayesian bound, and show how to use it in various hostile settings.
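For context, the classical PAC-Bayesian bound that the paper relaxes assumes i.i.d. observations and a loss bounded in [0, 1]. A standard McAllester/Maurer-style statement of that baseline bound, sketched here in LaTeX purely for illustration (it is not the theorem proved in this paper), reads:

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Classical PAC-Bayesian bound for bounded i.i.d. losses (McAllester/Maurer form),
% shown only as the baseline that the paper generalizes -- not the paper's theorem.
% With probability at least 1 - \delta over an i.i.d. sample of size n,
% simultaneously for every aggregation distribution \rho:
\[
  \int R \,\mathrm{d}\rho
  \;\le\;
  \int r_n \,\mathrm{d}\rho
  \;+\;
  \sqrt{\frac{\operatorname{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}.
\]
% Here R denotes the out-of-sample risk, r_n the empirical risk on n observations,
% and KL(\rho || \pi) the Kullback-Leibler divergence to the prior \pi.
\end{document}

As the abstract states, the paper's contribution is to replace the Kullback-Leibler term with a general Csiszár f-divergence, so that neither boundedness of the loss nor independence of the observations is required.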

Cite

Text

Alquier and Guedj. "Simpler PAC-Bayesian Bounds for Hostile Data." Machine Learning, 2018. doi:10.1007/s10994-017-5690-0

Markdown

[Alquier and Guedj. "Simpler PAC-Bayesian Bounds for Hostile Data." Machine Learning, 2018.](https://mlanthology.org/mlj/2018/alquier2018mlj-simpler/) doi:10.1007/s10994-017-5690-0

BibTeX

@article{alquier2018mlj-simpler,
  title     = {{Simpler PAC-Bayesian Bounds for Hostile Data}},
  author    = {Alquier, Pierre and Guedj, Benjamin},
  journal   = {Machine Learning},
  year      = {2018},
  pages     = {887-902},
  doi       = {10.1007/s10994-017-5690-0},
  volume    = {107},
  url       = {https://mlanthology.org/mlj/2018/alquier2018mlj-simpler/}
}