Accelerating Stochastic Composition Optimization

Abstract

We consider the stochastic nested composition optimization problem, where the objective is a composition of two expected-value functions. We propose a new stochastic first-order method, namely the accelerated stochastic compositional proximal gradient (ASC-PG) method. This algorithm updates the solution based on noisy gradient queries using a two-timescale iteration. ASC-PG is the first proximal gradient method for the stochastic composition problem that can handle a nonsmooth regularization penalty. We show that ASC-PG converges faster than the best known algorithms, and that it achieves the optimal sample-error complexity in several important special cases. We demonstrate the application of ASC-PG to reinforcement learning and conduct numerical experiments.
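Below is a minimal sketch of the two-timescale compositional proximal gradient idea described in the abstract, for a problem of the form min_x E_v[f_v(E_w[g_w(x)])] + R(x). The oracle names (`sample_g`, `sample_jacobian_g`, `sample_grad_f`, `prox_R`), the step-size schedules, and the extrapolation form are illustrative assumptions, not the paper's exact ASC-PG recursion.

```python
import numpy as np

def asc_pg_sketch(sample_g, sample_jacobian_g, sample_grad_f, prox_R,
                  x0, y0, n_iters=1000, alpha0=1.0, beta0=1.0,
                  a=0.75, b=0.5):
    """Two-timescale compositional proximal gradient sketch (assumed form).

    Solves (approximately) min_x E_v[f_v(E_w[g_w(x)])] + R(x), where
    - sample_g(x):          noisy sample of the inner map g(x)
    - sample_jacobian_g(x): noisy sample of the Jacobian of g at x
    - sample_grad_f(y):     noisy sample of the gradient of f at y
    - prox_R(v, alpha):     proximal operator of alpha * R at v
    """
    x, y = x0.copy(), y0.copy()
    x_prev = x.copy()
    for k in range(1, n_iters + 1):
        alpha = alpha0 / k**a   # fast timescale: x-update step size
        beta = beta0 / k**b     # slow timescale: y-tracking step size
        # Extrapolated query point (acceleration-style momentum, assumed form).
        z = x + (1.0 / beta - 1.0) * (x - x_prev) if k > 1 else x
        # Track the inner expectation g(x) with a noisy sample at z.
        y = (1.0 - beta) * y + beta * sample_g(z)
        # Stochastic composite gradient estimate via the chain rule.
        grad = sample_jacobian_g(x).T @ sample_grad_f(y)
        x_prev = x.copy()
        # Proximal gradient step handles the nonsmooth penalty R.
        x = prox_R(x - alpha * grad, alpha)
    return x
```

The auxiliary sequence `y` runs on a slower timescale than `x`, so it tracks the inner expectation g(x) while the outer proximal gradient step makes progress; the proximal map is what lets the scheme accommodate a nonsmooth penalty R.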

Cite

Text

Wang et al. "Accelerating Stochastic Composition Optimization." Journal of Machine Learning Research, 2017.

Markdown

[Wang et al. "Accelerating Stochastic Composition Optimization." Journal of Machine Learning Research, 2017.](https://mlanthology.org/jmlr/2017/wang2017jmlr-accelerating/)

BibTeX

@article{wang2017jmlr-accelerating,
  title     = {{Accelerating Stochastic Composition Optimization}},
  author    = {Wang, Mengdi and Liu, Ji and Fang, Ethan X.},
  journal   = {Journal of Machine Learning Research},
  year      = {2017},
  pages     = {1--23},
  volume    = {18},
  url       = {https://mlanthology.org/jmlr/2017/wang2017jmlr-accelerating/}
}