Conditionally Gaussian PAC-Bayes
Abstract
Recent studies have empirically investigated different methods to train stochastic neural networks on a classification task by optimising a PAC-Bayesian bound via stochastic gradient descent. Most of these procedures need to replace the misclassification error with a surrogate loss, leading to a mismatch between the optimisation objective and the actual generalisation bound. The present paper proposes a novel training algorithm that optimises the PAC-Bayesian bound, without relying on any surrogate loss. Empirical results show that this approach outperforms currently available PAC-Bayesian training methods.
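The idea of descending a PAC-Bayes bound directly, with no surrogate, can be illustrated in a toy setting. The sketch below is not the paper's algorithm: it uses a single linear classifier with a Gaussian weight posterior, for which the expected 0-1 error under the posterior has a closed form (a Gaussian CDF of the signed margin), so a McAllester-style bound can be minimised by plain gradient descent. The data, prior scale, confidence level, learning rate, and the finite-difference gradients are all illustrative choices.

```python
import numpy as np
from math import erf, sqrt, log

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

rng = np.random.default_rng(0)
n, d = 200, 2
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])  # linearly separable toy labels

sigma0 = 1.0  # prior scale N(0, sigma0^2 I) (illustrative)
delta = 0.05  # confidence level of the bound

def bound(params):
    """McAllester-style PAC-Bayes bound on the expected 0-1 error."""
    mu, log_sigma = params[:d], params[d]
    sigma = np.exp(log_sigma)
    norms = np.linalg.norm(X, axis=1)
    # For w ~ N(mu, sigma^2 I), the margin y * w.x is Gaussian, so the
    # expected misclassification error is a Gaussian CDF: no surrogate needed.
    err = np.mean([Phi(-y[i] * (X[i] @ mu) / (sigma * norms[i]))
                   for i in range(n)])
    # KL( N(mu, sigma^2 I) || N(0, sigma0^2 I) )
    kl = 0.5 * (d * sigma**2 / sigma0**2 + (mu @ mu) / sigma0**2
                - d + d * log(sigma0**2 / sigma**2))
    return err + sqrt((kl + log(2 * sqrt(n) / delta)) / (2 * n))

params = np.zeros(d + 1)  # mu = 0, log_sigma = 0
b0 = bound(params)
for _ in range(300):
    # Finite-difference gradient of the bound itself (fine for d + 1 = 3 params)
    g = np.zeros_like(params)
    for j in range(d + 1):
        e = np.zeros_like(params)
        e[j] = 1e-4
        g[j] = (bound(params + e) - bound(params - e)) / 2e-4
    params -= 0.5 * g
b1 = bound(params)
```

Since the objective is the bound itself, the value `b1` reached after training is (with probability at least 1 - delta) a valid generalisation guarantee, not just a training score; this is the mismatch-free property the abstract refers to.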
Cite
Text
Clerico et al. "Conditionally Gaussian PAC-Bayes." Artificial Intelligence and Statistics, 2022.
BibTeX
@inproceedings{clerico2022aistats-conditionally,
title = {{Conditionally Gaussian PAC-Bayes}},
author = {Clerico, Eugenio and Deligiannidis, George and Doucet, Arnaud},
booktitle = {Artificial Intelligence and Statistics},
year = {2022},
pages = {2311-2329},
volume = {151},
url = {https://mlanthology.org/aistats/2022/clerico2022aistats-conditionally/}
}