PAC-Bayesian Learning of Optimization Algorithms
Abstract
We apply PAC-Bayes theory to the setting of learning-to-optimize. To the best of our knowledge, we present the first framework to learn optimization algorithms with provable generalization guarantees (PAC bounds) and an explicit trade-off between a high probability of convergence and a high convergence speed. Even in the limit case where convergence is guaranteed, our learned optimization algorithms provably outperform related algorithms based on a (deterministic) worst-case analysis. Our results rely on PAC-Bayes bounds for general, unbounded loss functions based on exponential families. By generalizing existing ideas, we reformulate the learning procedure into a one-dimensional minimization problem and study the possibility of finding a global minimum, which enables the algorithmic realization of the learning procedure. As a proof-of-concept, we learn hyperparameters of standard optimization algorithms to empirically underline our theory.
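To make the setting concrete, the sketch below illustrates the general learning-to-optimize idea in the simplest case mentioned in the abstract: learning a hyperparameter (here, the step size of gradient descent) by minimizing the empirical risk over a one-dimensional parameter, and then evaluating a randomized (Gibbs-style) optimizer whose hyperparameter is drawn from a distribution. This is a minimal, hypothetical illustration under assumed choices (random quadratic problems, a grid search, a Gaussian "posterior"), not the paper's algorithm or its PAC-Bayesian bound.

```python
# Illustrative sketch (not the paper's method): learn the step size of
# gradient descent over a distribution of random quadratic problems, then
# evaluate a Gaussian "posterior" over that hyperparameter, mimicking the
# randomized (PAC-Bayesian) view of learning-to-optimize.
import numpy as np

rng = np.random.default_rng(0)
DIM, STEPS, N_TRAIN = 5, 20, 200

def sample_problem():
    """Random strongly convex quadratic f(x) = 0.5 * x^T A x with start x0."""
    M = rng.standard_normal((DIM, DIM))
    A = M @ M.T + np.eye(DIM)          # positive definite Hessian
    x0 = rng.standard_normal(DIM)
    return A, x0

def loss_after_gd(alpha, problem):
    """Function value reached by gradient descent with step size alpha."""
    A, x = problem
    for _ in range(STEPS):
        x = x - alpha * (A @ x)        # gradient of 0.5 x^T A x is A x
    return 0.5 * x @ A @ x

train_set = [sample_problem() for _ in range(N_TRAIN)]

def empirical_risk(alpha):
    """Average loss after STEPS iterations over the sampled training problems."""
    return np.mean([loss_after_gd(alpha, p) for p in train_set])

# Grid search over the one-dimensional hyperparameter, loosely echoing the
# idea of reducing the learning procedure to a one-dimensional problem.
grid = np.linspace(0.005, 0.3, 100)
risks = np.array([empirical_risk(a) for a in grid])
alpha_star = grid[risks.argmin()]

# Randomized optimizer: sample the step size from a narrow Gaussian
# distribution centered at alpha_star (a hypothetical choice of posterior).
posterior_samples = alpha_star + 0.01 * rng.standard_normal(50)
gibbs_risk = np.mean([empirical_risk(a) for a in posterior_samples])

print(f"learned step size: {alpha_star:.3f}")
print(f"empirical risk (point estimate): {risks.min():.3e}")
print(f"empirical Gibbs risk (randomized): {gibbs_risk:.3e}")
```

The gap between the point estimate and the Gibbs risk hints at the trade-off the abstract refers to: a wider distribution over hyperparameters costs convergence speed but is what the PAC-Bayesian guarantees are stated for.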
Cite
Text
Sucker and Ochs. "PAC-Bayesian Learning of Optimization Algorithms." Artificial Intelligence and Statistics, 2023.
Markdown
[Sucker and Ochs. "PAC-Bayesian Learning of Optimization Algorithms." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/sucker2023aistats-pacbayesian/)
BibTeX
@inproceedings{sucker2023aistats-pacbayesian,
title = {{PAC-Bayesian Learning of Optimization Algorithms}},
author = {Sucker, Michael and Ochs, Peter},
booktitle = {Artificial Intelligence and Statistics},
year = {2023},
pages = {8145--8164},
volume = {206},
url = {https://mlanthology.org/aistats/2023/sucker2023aistats-pacbayesian/}
}