Rao-Blackwellized Stochastic Gradients for Discrete Distributions
Abstract
We wish to compute the gradient of an expectation over a finite or countably infinite sample space having $K \leq \infty$ categories. When K is indeed infinite, or finite but very large, the relevant summation is intractable. Accordingly, various stochastic gradient estimators have been proposed. In this paper, we describe a technique that can be applied to reduce the variance of any such estimator, without changing its bias; in particular, unbiasedness is retained. We show that our technique is an instance of Rao-Blackwellization, and we demonstrate the improvement it yields on a semi-supervised classification problem and a pixel attention task.
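To make the abstract's idea concrete, here is a minimal sketch (not the paper's exact estimator) of Rao-Blackwellizing the plain score-function (REINFORCE) estimator for a softmax-parameterized discrete distribution: the terms for a high-probability set `C` of categories are summed analytically (zero variance), and only the complement is sampled. All names (`reinforce`, `rao_blackwellized`, the toy `f`) are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): K categories with softmax probabilities
# q(theta), and an arbitrary function f whose expectation E_q[f(z)] we
# differentiate with respect to theta.
K = 5
theta = rng.normal(size=K)
f = rng.normal(size=K)

def softmax(t):
    e = np.exp(t - t.max())
    return e / e.sum()

q = softmax(theta)

# Exact gradient of E_q[f(z)] w.r.t. theta (tractable here since K is small):
# d/dtheta_j sum_k q_k f_k = q_j (f_j - E_q[f])
exact = q * (f - q @ f)

def reinforce(n):
    """n single-sample score-function estimates: f(z) * grad_theta log q(z)."""
    zs = rng.choice(K, size=n, p=q)
    # For a softmax, grad_theta log q(z) = onehot(z) - q.
    return f[zs, None] * (np.eye(K)[zs] - q)

def rao_blackwellized(n, C):
    """Sum analytically over the index set C; sample only its complement.

    grad E_q[f] = sum_{k in C} f_k grad q_k
                  + (1 - q(C)) * E[ f(z) grad log q(z) | z not in C ]
    The analytic first term has zero variance, so overall variance drops.
    """
    mask = np.zeros(K, dtype=bool)
    mask[C] = True
    qC = q[mask].sum()
    # Analytic term: grad q_k = q_k (onehot(k) - q) for the softmax.
    analytic = np.zeros(K)
    for k in np.flatnonzero(mask):
        analytic += f[k] * q[k] * (np.eye(K)[k] - q)
    # Sampled term, drawn from the conditional q(z | z not in C).
    q_rest = np.where(mask, 0.0, q)
    q_rest /= q_rest.sum()
    zs = rng.choice(K, size=n, p=q_rest)
    sampled = (1.0 - qC) * f[zs, None] * (np.eye(K)[zs] - q)
    return analytic + sampled
```

Both estimators are unbiased for `exact`; with `C` set to the highest-probability category, the Rao-Blackwellized version has strictly smaller total variance, since the scaled conditional variance is bounded above by the unconditional one.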
Cite
Text
Liu et al. "Rao-Blackwellized Stochastic Gradients for Discrete Distributions." International Conference on Machine Learning, 2019.

Markdown
[Liu et al. "Rao-Blackwellized Stochastic Gradients for Discrete Distributions." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/liu2019icml-raoblackwellized/)

BibTeX
@inproceedings{liu2019icml-raoblackwellized,
title = {{Rao-Blackwellized Stochastic Gradients for Discrete Distributions}},
  author = {Liu, Runjing and Regier, Jeffrey and Tripuraneni, Nilesh and Jordan, Michael and McAuliffe, Jon},
booktitle = {International Conference on Machine Learning},
year = {2019},
  pages = {4023--4031},
volume = {97},
url = {https://mlanthology.org/icml/2019/liu2019icml-raoblackwellized/}
}