Generalization Bounds for Mixing Processes via Delayed Online-to-PAC Conversions
Abstract
We study the generalization error of statistical learning algorithms in a non-i.i.d. setting, where the training data is sampled from a stationary mixing process. We develop an analytic framework for this scenario based on a reduction to online learning with delayed feedback. In particular, we show that the existence of an online learning algorithm with bounded regret (against a fixed statistical learning algorithm, in a specially constructed game of online learning with delayed feedback) implies low generalization error of said statistical learning method, even if the data sequence is sampled from a mixing time series. The rates demonstrate a trade-off between the amount of delay in the online learning game and the degree of dependence between consecutive data points, with near-optimal rates recovered in previously studied settings when the delay is tuned appropriately as a function of the mixing time of the process.
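To make the delayed-feedback ingredient of the abstract concrete, here is a minimal, hypothetical sketch of online gradient descent in which the gradient observed at round t only becomes available D rounds later. This is an illustration of the generic delayed-feedback online learning model, not the paper's specific online-to-PAC construction; the function name `delayed_ogd` and all parameters are assumptions for the example.

```python
def delayed_ogd(loss_grads, delay, eta, x0):
    """Online gradient descent with feedback delayed by `delay` rounds.

    The learner plays iterate x_t; the gradient of round t's loss,
    evaluated at x_t, is only revealed (and applied) at round t + delay.
    `loss_grads` is a list of gradient functions, one per round.
    Returns the full sequence of iterates.
    """
    x = float(x0)
    iterates = [x]
    pending = []  # queue of (round at which feedback arrives, gradient value)
    for t, g in enumerate(loss_grads):
        # gradient is queried at the currently played iterate,
        # but will only be revealed `delay` rounds from now
        pending.append((t + delay, g(x)))
        # apply every gradient whose delay has elapsed, in arrival order
        while pending and pending[0][0] <= t:
            _, grad = pending.pop(0)
            x -= eta * grad
        iterates.append(x)
    return iterates
```

For a stationary quadratic loss, the delayed updates still converge as long as the step size is small relative to the delay, which mirrors the trade-off the abstract describes: larger delays (needed for more slowly mixing processes) force more conservative learning and hence slower rates.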
Cite
Text
Abélès et al. "Generalization Bounds for Mixing Processes via Delayed Online-to-PAC Conversions." Proceedings of The 36th International Conference on Algorithmic Learning Theory, 2025.
Markdown
[Abélès et al. "Generalization Bounds for Mixing Processes via Delayed Online-to-PAC Conversions." Proceedings of The 36th International Conference on Algorithmic Learning Theory, 2025.](https://mlanthology.org/alt/2025/abeles2025alt-generalization/)
BibTeX
@inproceedings{abeles2025alt-generalization,
title = {{Generalization Bounds for Mixing Processes via Delayed Online-to-PAC Conversions}},
author = {Abélès, Baptiste and Clerico, Eugenio and Neu, Gergely},
booktitle = {Proceedings of The 36th International Conference on Algorithmic Learning Theory},
year = {2025},
pages = {23--40},
volume = {272},
url = {https://mlanthology.org/alt/2025/abeles2025alt-generalization/}
}