Tracking the Median of Gradients with a Stochastic Proximal Point Method

Abstract

There are several applications of stochastic optimization where one can benefit from a robust estimate of the gradient: for example, distributed learning with corrupted nodes, large outliers in the training data, learning under privacy constraints, or heavy-tailed noise induced by the dynamics of the algorithm itself. Here we study SGD with robust gradient estimators based on estimating the median. We first derive iterative methods, based on the stochastic proximal point method, for computing the median gradient and generalizations thereof. We then propose an algorithm estimating the median gradient across *iterations*, and find that several well-known methods are particular cases of this framework. For instance, we observe that different forms of clipping allow one to compute online estimators of the *median* of gradients, in contrast to (heavy-ball) momentum, which corresponds to an online estimator of the *mean*. Finally, we provide a theoretical framework for an algorithm computing the median gradient across *samples*, and show that the resulting method can converge even under heavy-tailed, state-dependent noise.
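The contrast drawn above between clipping (online *median*) and momentum (online *mean*) can be illustrated on a synthetic stream. The following is a minimal sketch, not the paper's algorithm: the step size `lr` and threshold `clip_level` are hypothetical choices, and the "gradients" are scalar Cauchy samples, a distribution whose median exists but whose mean does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Heavy-tailed "gradient" stream: standard Cauchy noise around a true median of 1.0.
stream = 1.0 + rng.standard_cauchy(100_000)

lr = 0.01          # step size for both estimators (hypothetical choice)
clip_level = 1.0   # clipping threshold (hypothetical choice)

mean_est, med_est = 0.0, 0.0
for g in stream:
    # Heavy-ball-style exponential moving average: an online estimator of the
    # *mean*, which is undefined for Cauchy noise, so this estimate stays unstable.
    mean_est += lr * (g - mean_est)
    # Clipping the innovation: an online estimator of the *median*. For small
    # clip_level this behaves like a scaled sign update, i.e. stochastic
    # subgradient descent on m -> E|g - m|, which is minimized at the median.
    med_est += lr * np.clip(g - med_est, -clip_level, clip_level)

print(f"mean-style estimate:   {mean_est:.3f}")   # fluctuates wildly
print(f"median-style estimate: {med_est:.3f}")    # close to the true median 1.0
```

Running this, the clipped-innovation estimate settles near 1.0 while the plain moving average is repeatedly thrown off by outliers, which is the behavior the abstract attributes to median-style versus mean-style gradient estimators.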

Cite

Text

Schaipp et al. "Tracking the Median of Gradients with a Stochastic Proximal Point Method." Transactions on Machine Learning Research, 2025.

Markdown

[Schaipp et al. "Tracking the Median of Gradients with a Stochastic Proximal Point Method." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/schaipp2025tmlr-tracking/)

BibTeX

@article{schaipp2025tmlr-tracking,
  title     = {{Tracking the Median of Gradients with a Stochastic Proximal Point Method}},
  author    = {Schaipp, Fabian and Garrigos, Guillaume and Simsekli, Umut and Gower, Robert M.},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/schaipp2025tmlr-tracking/}
}