Learning from Time-Dependent Streaming Data with Online Stochastic Algorithms
Abstract
This paper addresses stochastic optimization in a streaming setting with time-dependent and biased gradient estimates. We analyze several first-order methods, including Stochastic Gradient Descent (SGD), mini-batch SGD, and time-varying mini-batch SGD, along with their Polyak-Ruppert averages. Our non-asymptotic analysis establishes novel heuristics that link dependence, biases, and convexity levels, enabling accelerated convergence. Specifically, our findings demonstrate that (i) time-varying mini-batch SGD methods can break long- and short-range dependence structures, (ii) biased SGD methods can achieve performance comparable to their unbiased counterparts, and (iii) Polyak-Ruppert averaging can accelerate the convergence of these stochastic optimization algorithms. To validate our theoretical findings, we conduct a series of experiments using both simulated and real-life time-dependent data.
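As a rough illustration of the methods the abstract names, the sketch below runs time-varying mini-batch SGD with Polyak-Ruppert averaging on a simulated time-dependent stream (AR(1)-correlated features in a least-squares model). The batch-size schedule `n_t`, the step-size decay, and the AR(1) coefficient are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated time-dependent stream: AR(1)-correlated features,
# linear model y_t = x_t . theta* + noise (illustrative setup).
theta_star = np.array([2.0, -1.0])
d = theta_star.size

def stream(n):
    """Yield n observations (x_t, y_t) with short-range dependence."""
    x = np.zeros(d)
    for _ in range(n):
        x = 0.8 * x + rng.normal(size=d)      # AR(1) dependence in features
        y = x @ theta_star + 0.1 * rng.normal()
        yield x, y

theta = np.zeros(d)       # current SGD iterate
theta_bar = np.zeros(d)   # Polyak-Ruppert running average
t = 0
batch = []
for x, y in stream(20000):
    batch.append((x, y))
    n_t = int(1 + t ** 0.5)                   # growing mini-batch size (assumed schedule)
    if len(batch) >= n_t:
        t += 1
        X = np.array([b[0] for b in batch])
        Y = np.array([b[1] for b in batch])
        grad = X.T @ (X @ theta - Y) / len(batch)   # least-squares mini-batch gradient
        theta -= (0.5 / t ** 0.66) * grad           # decaying step size
        theta_bar += (theta - theta_bar) / t        # Polyak-Ruppert average
        batch = []

print(theta_bar)  # averaged iterate, close to theta_star
```

Growing the mini-batch over time averages out the short-range dependence within each batch, which is the intuition behind point (i) of the abstract; the final averaged iterate `theta_bar` lands near `theta_star`.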
Cite
Text

Godichon-Baggioni et al. "Learning from Time-Dependent Streaming Data with Online Stochastic Algorithms." Transactions on Machine Learning Research, 2023.

Markdown

[Godichon-Baggioni et al. "Learning from Time-Dependent Streaming Data with Online Stochastic Algorithms." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/godichonbaggioni2023tmlr-learning/)

BibTeX
@article{godichonbaggioni2023tmlr-learning,
  title = {{Learning from Time-Dependent Streaming Data with Online Stochastic Algorithms}},
  author = {Godichon-Baggioni, Antoine and Werge, Nicklas and Wintenberger, Olivier},
  journal = {Transactions on Machine Learning Research},
  year = {2023},
  url = {https://mlanthology.org/tmlr/2023/godichonbaggioni2023tmlr-learning/}
}