Joint Control Variate for Faster Black-Box Variational Inference
Abstract
The performance of black-box variational inference is sometimes hindered by gradient estimators with high variance. This variance comes from two sources of randomness: data subsampling and Monte Carlo sampling. While existing control variates only address Monte Carlo noise, and incremental gradient methods typically only address data subsampling, we propose a new "joint" control variate that jointly reduces variance from both sources of noise. This significantly reduces gradient variance, leading to faster optimization in several applications.
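As a rough sketch of the idea (the notation below is generic and not taken from the paper): a control variate is a zero-mean correction subtracted from an unbiased gradient estimator. Writing the estimator as a function of a data minibatch $\mathcal{B}$ and a Monte Carlo sample $\epsilon$,

$$
\hat{g}_{\mathrm{cv}}(\mathcal{B},\epsilon) \;=\; \hat{g}(\mathcal{B},\epsilon) \;-\; c(\mathcal{B},\epsilon),
\qquad
\mathbb{E}_{\mathcal{B},\epsilon}\!\left[c(\mathcal{B},\epsilon)\right] = 0,
$$

which remains unbiased for any such $c$. A standard control variate builds $c$ from $\epsilon$ alone, so it can only cancel Monte Carlo noise; a joint control variate lets $c$ depend on both $\mathcal{B}$ and $\epsilon$, so a single correction can track variance from data subsampling and Monte Carlo sampling simultaneously.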
Cite
Text
Wang et al. "Joint Control Variate for Faster Black-Box Variational Inference." Artificial Intelligence and Statistics, 2024.
Markdown
[Wang et al. "Joint Control Variate for Faster Black-Box Variational Inference." Artificial Intelligence and Statistics, 2024.](https://mlanthology.org/aistats/2024/wang2024aistats-joint/)
BibTeX
@inproceedings{wang2024aistats-joint,
  title = {{Joint Control Variate for Faster Black-Box Variational Inference}},
  author = {Wang, Xi and Geffner, Tomas and Domke, Justin},
  booktitle = {Artificial Intelligence and Statistics},
  year = {2024},
  pages = {1639--1647},
  volume = {238},
  url = {https://mlanthology.org/aistats/2024/wang2024aistats-joint/}
}