Which Algorithms Have Tight Generalization Bounds?
Abstract
We study which machine learning algorithms have tight generalization bounds with respect to a given collection of population distributions. Our results build on and extend the recent work of Gastpar et al. (2023). First, we present conditions that preclude the existence of tight generalization bounds. Specifically, we show that algorithms that have certain inductive biases that cause them to be unstable do not admit tight generalization bounds. Next, we show that algorithms that are sufficiently loss-stable do have tight generalization bounds. We conclude with a simple characterization that relates the existence of tight generalization bounds to the conditional variance of the algorithm's loss.
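As a rough, informal sketch of the quantities discussed above (the notation here is our own illustration and is not taken from the paper), a learning algorithm A maps a sample S = (z_1, ..., z_n) ~ D^n to a hypothesis A(S), with population and empirical losses

    L_D(A(S)) = E_{z ~ D}[ ℓ(A(S), z) ],    L̂_S(A(S)) = (1/n) Σ_{i=1}^{n} ℓ(A(S), z_i).

Informally, a bound b(S) is tight with respect to a collection of distributions if it upper-bounds the generalization gap L_D(A(S)) − L̂_S(A(S)) for every distribution D in the collection while rarely exceeding that gap by much. The characterization mentioned in the abstract relates the existence of such a bound to a conditional variance of the algorithm's loss, i.e. a quantity of the form Var[ ℓ(A(S), z) | · ]; the precise conditioning and formal definitions are given in the paper.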
Cite
Text
Gastpar et al. "Which Algorithms Have Tight Generalization Bounds?" Advances in Neural Information Processing Systems, 2025.
Markdown
[Gastpar et al. "Which Algorithms Have Tight Generalization Bounds?" Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/gastpar2025neurips-algorithms/)
BibTeX
@inproceedings{gastpar2025neurips-algorithms,
  title     = {{Which Algorithms Have Tight Generalization Bounds?}},
  author    = {Gastpar, Michael and Nachum, Ido and Shafer, Jonathan and Weinberger, Thomas},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/gastpar2025neurips-algorithms/}
}