Stochastic Gradient Estimate Variance in Contrastive Divergence and Persistent Contrastive Divergence
Abstract
Contrastive Divergence (CD) and Persistent Contrastive Divergence (PCD) are popular methods for training the weights of Restricted Boltzmann Machines. However, both methods rely on approximate sampling from the model distribution. As a side effect, these approximations yield significantly different biases and variances for stochastic gradient estimates of individual data points. It is well known that CD yields a biased gradient estimate. In this paper, however, we show empirically that CD has a lower stochastic gradient estimate variance than exact sampling, while the mean of subsequent PCD estimates has a higher variance than exact sampling. The results offer one explanation for the finding that CD can be used with smaller minibatches or higher learning rates than PCD.
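The distinction the abstract draws can be illustrated with a minimal sketch: both CD and PCD estimate the negative phase of the RBM log-likelihood gradient with Gibbs sampling, but CD-k restarts the chain at the data point for every estimate, while PCD keeps a persistent chain running between updates. The following NumPy sketch is illustrative only and is not the paper's experimental setup; the RBM size, random weights, seed, and number of estimates are all arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_h(v, W, b):
    # Hidden activation probabilities and a binary sample given visibles.
    p = sigmoid(v @ W + b)
    return p, (rng.random(p.shape) < p).astype(float)

def sample_v(h, W, a):
    # Visible activation probabilities and a binary sample given hiddens.
    p = sigmoid(h @ W.T + a)
    return p, (rng.random(p.shape) < p).astype(float)

def grad_estimate(v0, W, a, b, k, v_init):
    """One stochastic estimate of the weight gradient for data point v0:
    positive phase from v0, negative phase from k Gibbs steps started
    at v_init. Returns the estimate and the final chain state."""
    ph0, _ = sample_h(v0, W, b)
    vk = v_init
    for _ in range(k):
        _, hk = sample_h(vk, W, b)
        _, vk = sample_v(hk, W, a)
    phk, _ = sample_h(vk, W, b)
    return np.outer(v0, ph0) - np.outer(vk, phk), vk

# Tiny RBM with arbitrary random weights and one binary data point.
nv, nh = 6, 4
W = rng.normal(0.0, 0.5, (nv, nh))
a, b = np.zeros(nv), np.zeros(nh)
v0 = (rng.random(nv) < 0.5).astype(float)
n_est = 2000

# CD-1: the chain restarts at the data point for every estimate.
cd = np.array([grad_estimate(v0, W, a, b, 1, v0)[0] for _ in range(n_est)])

# PCD-1: the chain state persists from one estimate to the next.
v_pers = v0.copy()
pcd_list = []
for _ in range(n_est):
    g, v_pers = grad_estimate(v0, W, a, b, 1, v_pers)
    pcd_list.append(g)
pcd = np.array(pcd_list)

print("mean per-element variance, CD-1: ", cd.var(axis=0).mean())
print("mean per-element variance, PCD-1:", pcd.var(axis=0).mean())
```

Because the weights here are not trained, the printed variances say nothing about the paper's quantitative results; the sketch only shows where the two estimators differ, namely in how the negative-phase chain is initialized between estimates.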
Cite
Text
Berglund and Raiko. "Stochastic Gradient Estimate Variance in Contrastive Divergence and Persistent Contrastive Divergence." International Conference on Learning Representations, 2014.
Markdown
[Berglund and Raiko. "Stochastic Gradient Estimate Variance in Contrastive Divergence and Persistent Contrastive Divergence." International Conference on Learning Representations, 2014.](https://mlanthology.org/iclr/2014/berglund2014iclr-stochastic/)
BibTeX
@inproceedings{berglund2014iclr-stochastic,
title = {{Stochastic Gradient Estimate Variance in Contrastive Divergence and Persistent Contrastive Divergence}},
author = {Berglund, Mathias and Raiko, Tapani},
booktitle = {International Conference on Learning Representations},
year = {2014},
url = {https://mlanthology.org/iclr/2014/berglund2014iclr-stochastic/}
}