Stability and Sharper Risk Bounds with Convergence Rate $\tilde{O}(1/n^2)$

Abstract

Prior work (Klochkov \& Zhivotovskiy, 2021) establishes high-probability excess risk bounds of at most $O\left(\log(n)/n\right)$ via algorithmic stability for strongly convex learners. We show that under similar common assumptions (the Polyak-Łojasiewicz condition, smoothness, and Lipschitz continuity of the losses), rates of up to $O\left(\log^2(n)/n^2\right)$ are achievable. To our knowledge, our analysis also provides the tightest high-probability bounds for gradient-based generalization gaps in nonconvex settings.
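
For reference, the standard forms of the three assumptions named above are sketched below; the constant names ($\mu$, $\beta$, $G$) and the notation $F$ for the risk and $f(w; z)$ for the loss on example $z$ are chosen here for illustration and may differ from the paper's own notation.

```latex
% Standard statements of the assumptions (illustrative notation, not
% necessarily the paper's): F is the risk, f(w; z) the loss at example z.
\begin{align*}
  &\text{Polyak-\L{}ojasiewicz (PL) condition:} &
    \tfrac{1}{2}\,\|\nabla F(w)\|^2 &\ge \mu\,\bigl(F(w) - \inf_{w'} F(w')\bigr), \\
  &\text{$\beta$-smoothness:} &
    \|\nabla f(w; z) - \nabla f(w'; z)\| &\le \beta\,\|w - w'\|, \\
  &\text{$G$-Lipschitz continuity:} &
    |f(w; z) - f(w'; z)| &\le G\,\|w - w'\|.
\end{align*}
```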

Cite

Text

Zhu et al. "Stability and Sharper Risk Bounds with Convergence Rate $\tilde{O}(1/n^2)$." Advances in Neural Information Processing Systems, 2025.

Markdown

[Zhu et al. "Stability and Sharper Risk Bounds with Convergence Rate $\tilde{O}(1/n^2)$." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/zhu2025neurips-stability/)

BibTeX

@inproceedings{zhu2025neurips-stability,
  title     = {{Stability and Sharper Risk Bounds with Convergence Rate $\tilde{O}(1/n^2)$}},
  author    = {Zhu, Bowei and Li, Shaojie and Yi, Mingyang and Liu, Yong},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/zhu2025neurips-stability/}
}