On Markov Chain Gradient Descent
Abstract
Stochastic gradient methods are the workhorse algorithms of large-scale optimization problems in machine learning, signal processing, and other computational sciences and engineering. This paper studies Markov chain gradient descent, a variant of stochastic gradient descent in which the random samples are taken along the trajectory of a Markov chain. Existing results for this method assume convex objectives and a reversible Markov chain, which limits their applicability. We establish new non-ergodic convergence under wider step sizes, for nonconvex problems, and for non-reversible finite-state Markov chains. Nonconvexity makes our method applicable to broader problem classes. Non-reversible finite-state Markov chains, on the other hand, can mix substantially faster. To obtain these results, we introduce a new technique that varies the mixing levels of the Markov chains. The reported numerical results validate our contributions.
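To make the sampling scheme concrete, below is a minimal Python sketch of Markov chain gradient descent for a finite-sum objective f(x) = (1/n) Σ_i f_i(x). It is an illustrative assumption rather than the paper's exact algorithm or settings: the function name mcgd, the 1/√(k+1) step-size schedule, and the toy least-squares problem are all hypothetical. The one essential feature it shows is that the sample index advances one step along a finite-state Markov chain with transition matrix P instead of being drawn i.i.d.

import numpy as np

def mcgd(grad_fns, P, x0, num_iters=10_000, step=lambda k: 1.0 / np.sqrt(k + 1)):
    """Sketch of Markov chain gradient descent (illustrative, not the paper's code).

    grad_fns : list of callables; grad_fns[i](x) returns the gradient of f_i at x.
    P        : (n, n) row-stochastic transition matrix of the sampling chain.
    x0       : initial iterate.
    step     : step-size schedule; the paper analyzes wider choices than classical ones.
    """
    rng = np.random.default_rng(0)
    n = len(grad_fns)
    x = np.asarray(x0, dtype=float)
    i = rng.integers(n)                      # initial state of the chain
    for k in range(num_iters):
        x = x - step(k) * grad_fns[i](x)     # gradient step using the current state's sample
        i = rng.choice(n, p=P[i])            # advance the Markov chain one step
    return x

# Toy usage: least squares f_i(x) = 0.5 * (a_i @ x - b_i)^2 with a 3-state chain.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
grads = [lambda x, a=A[i], bi=b[i]: (a @ x - bi) * a for i in range(3)]
P = np.array([[0.1, 0.9, 0.0],               # doubly stochastic but non-reversible:
              [0.0, 0.1, 0.9],               # probability flows around the cycle
              [0.9, 0.0, 0.1]])              # in one direction only
x_hat = mcgd(grads, P, x0=np.zeros(2))

The toy transition matrix is doubly stochastic (so its stationary distribution is uniform) but not reversible, which is the regime in which the abstract notes that finite-state Markov chains can mix substantially faster.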
Cite
Text
Sun et al. "On Markov Chain Gradient Descent." Neural Information Processing Systems, 2018.
Markdown
[Sun et al. "On Markov Chain Gradient Descent." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/sun2018neurips-markov/)
BibTeX
@inproceedings{sun2018neurips-markov,
title = {{On Markov Chain Gradient Descent}},
author = {Sun, Tao and Sun, Yuejiao and Yin, Wotao},
booktitle = {Neural Information Processing Systems},
year = {2018},
pages = {9896--9905},
url = {https://mlanthology.org/neurips/2018/sun2018neurips-markov/}
}