Bridging the Gap Between Stochastic Gradient MCMC and Stochastic Optimization

Abstract

Stochastic gradient Markov chain Monte Carlo (SG-MCMC) methods are Bayesian analogs to popular stochastic optimization methods; however, this connection is not well studied. We explore this relationship by applying simulated annealing to an SG-MCMC algorithm. Furthermore, we extend recent SG-MCMC methods with two key components: i) adaptive preconditioners (as in Adagrad or RMSprop), and ii) adaptive element-wise momentum weights. The zero-temperature limit gives a novel stochastic optimization method with adaptive element-wise momentum weights, whereas conventional optimization methods only have a shared, static momentum weight. Under certain assumptions, our theoretical analysis suggests the proposed simulated annealing approach converges close to the global optimum. Experiments on several deep neural network models show state-of-the-art results compared to related stochastic optimization algorithms.
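To make the ingredients concrete, below is a minimal NumPy sketch of a single update that combines an RMSprop-style preconditioner, an element-wise momentum weight, and temperature-scaled Gaussian noise. It is only an illustration of the general recipe described in the abstract; the function, its hyperparameters, and the particular momentum weighting are hypothetical and do not reproduce the paper's exact update rule.

```python
import numpy as np

def annealed_preconditioned_step(theta, grad, v, m, lr=1e-3, beta=0.99,
                                 eps=1e-8, temperature=0.0, rng=None):
    """Illustrative update (not the paper's exact algorithm).

    theta       : parameter vector
    grad        : stochastic gradient at theta
    v           : running average of squared gradients (preconditioner state)
    m           : momentum buffer
    temperature : annealing temperature; 0 recovers a deterministic optimizer,
                  > 0 injects Gaussian noise as in SG-MCMC samplers
    """
    rng = rng or np.random.default_rng()
    # RMSprop-style adaptive preconditioner.
    v = beta * v + (1.0 - beta) * grad**2
    precond = 1.0 / (np.sqrt(v) + eps)
    # Element-wise momentum weight (hypothetical choice): coordinates with
    # larger squared-gradient estimates get smaller momentum.
    alpha = 1.0 / (1.0 + np.sqrt(v))
    m = alpha * m - lr * precond * grad
    # Temperature-scaled noise; at temperature = 0 this term vanishes and the
    # update reduces to preconditioned momentum SGD.
    noise = np.sqrt(2.0 * lr * temperature * precond) * rng.standard_normal(theta.shape)
    theta = theta + m + noise
    return theta, v, m
```

Annealing the temperature toward zero over iterations moves such an update from a sampler-like regime (noise-injected exploration) toward a purely optimization-like regime, which is the bridge the abstract refers to.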

Cite

Text

Chen et al. "Bridging the Gap Between Stochastic Gradient MCMC and Stochastic Optimization." International Conference on Artificial Intelligence and Statistics, 2016.

Markdown

[Chen et al. "Bridging the Gap Between Stochastic Gradient MCMC and Stochastic Optimization." International Conference on Artificial Intelligence and Statistics, 2016.](https://mlanthology.org/aistats/2016/chen2016aistats-bridging/)

BibTeX

@inproceedings{chen2016aistats-bridging,
  title     = {{Bridging the Gap Between Stochastic Gradient MCMC and Stochastic Optimization}},
  author    = {Chen, Changyou and Carlson, David E. and Gan, Zhe and Li, Chunyuan and Carin, Lawrence},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year      = {2016},
  pages     = {1051--1060},
  url       = {https://mlanthology.org/aistats/2016/chen2016aistats-bridging/}
}