Sharper Bounds for Uniformly Stable Algorithms with Stationary Mixing Process
Abstract
Generalization analysis of learning algorithms often builds on the critical assumption that training examples are independently and identically distributed, an assumption frequently violated in practical problems such as time series prediction. In this paper, we use algorithmic stability to study the generalization performance of learning algorithms with $\psi$-mixing data, where the dependency between observations weakens over time. We show that uniformly stable algorithms guarantee high-probability generalization bounds of the order $O(1/\sqrt{n})$ (within a logarithmic factor), where $n$ is the sample size. We apply our general result to specific algorithms, including regularization schemes, stochastic gradient descent, and localized iterative regularization, and develop excess population risk bounds for learning with $\psi$-mixing data. Our analysis builds on a novel moment bound for weakly dependent random variables on a $\varphi$-mixing sequence and a novel decomposition of the generalization error.
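For context, the two notions at the core of the abstract can be written in their standard forms; the following is an illustrative sketch of the usual definitions, not the paper's exact statements or assumptions. An algorithm $A$ is $\beta$-uniformly stable if, for any two samples $S, S'$ of size $n$ that differ in a single example,
$$\sup_{z}\,\big|\ell(A(S), z) - \ell(A(S'), z)\big| \le \beta.$$
For a stationary sequence $\{Z_t\}_{t \ge 1}$, the $\psi$-mixing coefficient at gap $k$ is
$$\psi(k) = \sup\left\{\left|\frac{\mathbb{P}(A \cap B)}{\mathbb{P}(A)\,\mathbb{P}(B)} - 1\right| : A \in \sigma(Z_1, \ldots, Z_t),\; B \in \sigma(Z_{t+k}, Z_{t+k+1}, \ldots),\; \mathbb{P}(A)\,\mathbb{P}(B) > 0,\; t \ge 1\right\},$$
and the sequence is $\psi$-mixing if $\psi(k) \to 0$ as $k \to \infty$; the $\varphi$-mixing coefficient is defined analogously via $\sup\,|\mathbb{P}(B \mid A) - \mathbb{P}(B)|$. Under definitions of this kind, an algorithm with stability parameter $\beta = O(1/n)$ is the regime in which a high-probability generalization gap of order $O(1/\sqrt{n})$, up to logarithmic factors, is to be expected.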
Cite
Text
Fu et al. "Sharper Bounds for Uniformly Stable Algorithms with Stationary Mixing Process." International Conference on Learning Representations, 2023.
Markdown
[Fu et al. "Sharper Bounds for Uniformly Stable Algorithms with Stationary Mixing Process." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/fu2023iclr-sharper/)
BibTeX
@inproceedings{fu2023iclr-sharper,
  title     = {{Sharper Bounds for Uniformly Stable Algorithms with Stationary Mixing Process}},
  author    = {Fu, Shi and Lei, Yunwen and Cao, Qiong and Tian, Xinmei and Tao, Dacheng},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/fu2023iclr-sharper/}
}