Accelerating Stochastic Composition Optimization
Abstract
Consider the stochastic composition optimization problem where the objective is a composition of two expected-value functions. We propose a new stochastic first-order method, namely the accelerated stochastic compositional proximal gradient (ASC-PG) method, which updates based on queries to the sampling oracle using two different timescales. ASC-PG is the first proximal gradient method for the stochastic composition problem that can handle nonsmooth regularization penalties. We show that ASC-PG exhibits faster convergence than the best-known algorithms, and that it achieves the optimal sample-error complexity in several important special cases. We further demonstrate the application of ASC-PG to reinforcement learning and conduct numerical experiments.
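To make the problem setting concrete, below is a minimal sketch of a two-timescale stochastic compositional proximal-gradient iteration for an objective of the form min_x E_v[f_v(E_w[g_w(x)])] + h(x), where an auxiliary variable tracks the inner expectation on a faster timescale and the nonsmooth penalty h is handled by a proximal step. The toy instance, step-size schedules, and helper names (sample_g, grad_f, prox_l1) are illustrative assumptions; this is not the exact ASC-PG update or its acceleration scheme from the paper.

```python
# Sketch of a two-timescale stochastic compositional proximal-gradient iteration
# for min_x  E_v[ f_v( E_w[ g_w(x) ] ) ] + h(x).  All problem data and step-size
# choices below are illustrative assumptions, not the ASC-PG algorithm itself.
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: E_w[g_w(x)] = A x, f(y) = 0.5*||y - b||^2, h(x) = lam*||x||_1,
# so the composed objective is a lasso-type problem with a sparse target.
d, m = 20, 50
A = rng.normal(size=(m, d))
x_true = np.zeros(d); x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
lam = 0.1
L = np.linalg.norm(A, 2) ** 2           # rough smoothness constant for scaling

def sample_g(x):
    """Noisy sample of the inner function g_w(x) and its Jacobian."""
    noise = 0.01 * rng.normal(size=m)
    return A @ x + noise, A

def grad_f(y):
    """Noisy sample of the outer gradient grad f_v(y)."""
    return (y - b) + 0.01 * rng.normal(size=m)

def prox_l1(z, t):
    """Proximal operator of t*lam*||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)

x = np.zeros(d)
y = np.zeros(m)                          # running estimate of E_w[g_w(x)]
for k in range(1, 20001):
    alpha = 1.0 / (L * k ** 0.75)        # slow timescale: x update
    beta = 1.0 / k ** 0.5                # fast timescale: inner-value tracking
    g_val, g_jac = sample_g(x)
    y = (1 - beta) * y + beta * g_val    # track the inner expectation
    grad = g_jac.T @ grad_f(y)           # chain-rule stochastic gradient estimate
    x = prox_l1(x - alpha * grad, alpha)  # proximal step handles the l1 penalty

print("estimate (first 5 coords):", np.round(x[:5], 2))
print("truth    (first 5 coords):", x_true[:5])
```

Under these assumptions the iterate approaches the sparse solution of the composed lasso objective; the proximal step keeps the small coordinates at exactly zero, which is the role the nonsmooth penalty plays in the abstract's setting.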
Cite
Text
Wang et al. "Accelerating Stochastic Composition Optimization." Neural Information Processing Systems, 2016.
Markdown
[Wang et al. "Accelerating Stochastic Composition Optimization." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/wang2016neurips-accelerating/)
BibTeX
@inproceedings{wang2016neurips-accelerating,
title = {{Accelerating Stochastic Composition Optimization}},
author = {Wang, Mengdi and Liu, Ji and Fang, Ethan},
booktitle = {Neural Information Processing Systems},
year = {2016},
pages = {1714-1722},
url = {https://mlanthology.org/neurips/2016/wang2016neurips-accelerating/}
}