Three Operator Splitting with Subgradients, Stochastic Gradients, and Adaptive Learning Rates

Abstract

Three Operator Splitting (TOS) (Davis & Yin, 2017) can effectively minimize the sum of multiple convex functions when an efficient gradient oracle or proximal operator is available for each term. This requirement often fails in machine learning applications: (i) instead of full gradients, only stochastic gradients may be available; and (ii) instead of proximal operators, it may be more efficient and realistic to handle complex penalty functions via subgradients. Motivated by these concerns, we analyze three potentially valuable extensions of TOS. The first two permit using subgradients and stochastic gradients, and are shown to ensure an $\mathcal{O}(1/\sqrt{t})$ convergence rate. The third extension, AdapTOS, endows TOS with adaptive step-sizes. For the important setting of optimizing a convex loss over the intersection of convex sets, AdapTOS attains universal convergence rates, i.e., the rate adapts to the unknown smoothness degree of the objective. We compare our proposed methods against competing approaches on various applications.
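For context, the sketch below illustrates the standard TOS (Davis–Yin) iteration that the paper builds on, for minimizing $f(x) + g(x) + h(x)$ with $f$ smooth and $g$, $h$ prox-friendly. The function names, the fixed step size `gamma`, and the toy least-squares-over-an-intersection example are illustrative assumptions, not the paper's experimental setup; the paper's extensions replace the gradient of $f$ with subgradients or stochastic gradients and, in AdapTOS, choose the step size adaptively.

```python
import numpy as np

def tos(grad_f, prox_g, prox_h, z0, gamma, iters=500):
    """Minimal sketch of the standard three operator splitting iteration
    (Davis & Yin, 2017) for minimizing f(x) + g(x) + h(x).
    A fixed step size `gamma` is assumed here; the paper's variants use
    subgradients / stochastic gradients of f and adaptive step sizes."""
    z = z0.copy()
    x_g = z0.copy()
    for _ in range(iters):
        x_g = prox_g(z, gamma)                            # prox step on g
        grad = grad_f(x_g)                                # (sub)gradient of f at x_g
        x_h = prox_h(2 * x_g - z - gamma * grad, gamma)   # prox step on h
        z = z + x_h - x_g                                 # update driving sequence
    return x_g

# Toy example (illustrative only): least squares over the intersection of a
# box and a Euclidean ball; g and h are indicator functions whose proximal
# operators are projections.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
grad_f = lambda x: A.T @ (A @ x - b)
prox_box = lambda v, gamma: np.clip(v, -1.0, 1.0)                      # project onto [-1, 1]^5
prox_ball = lambda v, gamma: v / max(1.0, np.linalg.norm(v) / 2.0)     # project onto ball of radius 2
gamma = 1.0 / np.linalg.norm(A.T @ A, 2)                               # step size below 2/L
x_hat = tos(grad_f, prox_box, prox_ball, z0=np.zeros(5), gamma=gamma)
```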

Cite

Text

Yurtsever et al. "Three Operator Splitting with Subgradients, Stochastic Gradients, and Adaptive Learning Rates." Neural Information Processing Systems, 2021.

Markdown

[Yurtsever et al. "Three Operator Splitting with Subgradients, Stochastic Gradients, and Adaptive Learning Rates." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/yurtsever2021neurips-three/)

BibTeX

@inproceedings{yurtsever2021neurips-three,
  title     = {{Three Operator Splitting with Subgradients, Stochastic Gradients, and Adaptive Learning Rates}},
  author    = {Yurtsever, Alp and Gu, Alex and Sra, Suvrit},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/yurtsever2021neurips-three/}
}