Generalization of Hamiltonian Algorithms

Abstract

A method to prove generalization results for a class of stochastic learning algorithms is presented. It applies whenever the algorithm generates a distribution that is absolutely continuous relative to some a-priori measure and the logarithm of its density is exponentially concentrated about its mean. Applications include bounds for the Gibbs algorithm, randomizations of stable deterministic algorithms, combinations thereof, and PAC-Bayesian bounds with data-dependent priors.
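As a concrete illustration of the setting (a standard example of such an algorithm, not quoted from this page): the Gibbs algorithm with inverse temperature $\beta$ outputs the distribution $Q$ whose density relative to the a-priori measure $\pi$ is

\[
\frac{dQ}{d\pi}(h) \;=\; \frac{e^{-\beta \hat{L}(h)}}{\mathbb{E}_{h' \sim \pi}\!\left[ e^{-\beta \hat{L}(h')} \right]},
\]

where $\hat{L}(h)$ is the empirical loss of hypothesis $h$ on the sample. The exponent $\beta \hat{L}$ plays the role of the Hamiltonian, and $\ln (dQ/d\pi)$ concentrates about its mean whenever $\hat{L}$ does, which is the condition the abstract refers to.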

Cite

Text

Maurer. "Generalization of Hamiltonian Algorithms." Neural Information Processing Systems, 2024. doi:10.52202/079017-0834

Markdown

[Maurer. "Generalization of Hamiltonian Algorithms." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/maurer2024neurips-generalization/) doi:10.52202/079017-0834

BibTeX

@inproceedings{maurer2024neurips-generalization,
  title     = {{Generalization of Hamiltonian Algorithms}},
  author    = {Maurer, Andreas},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0834},
  url       = {https://mlanthology.org/neurips/2024/maurer2024neurips-generalization/}
}