U-Statistics for Importance-Weighted Variational Inference
Abstract
We propose the use of U-statistics to reduce variance for gradient estimation in importance-weighted variational inference. The key observation is that, given a base gradient estimator that requires $m > 1$ samples and a total of $n > m$ samples to be used for estimation, lower variance is achieved by averaging the base estimator over overlapping batches of size $m$ than over disjoint batches, as is current practice. We use classical U-statistic theory to analyze the variance reduction, and we propose novel approximations with theoretical guarantees to ensure computational efficiency. We find empirically that U-statistic variance reduction can lead to modest to significant improvements in inference performance on a range of models, at little computational cost.
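For intuition, here is a minimal Python sketch (not from the paper) contrasting the two averaging schemes the abstract describes. `base_estimator` is a hypothetical placeholder for an $m$-sample estimator such as an importance-weighted gradient term; the paper's actual estimators and its efficient approximations are not reproduced here.

```python
# Sketch: disjoint-batch averaging vs. a (complete) U-statistic average
# of a base estimator that requires m samples. Assumes a placeholder
# kernel `base_estimator`; any m-sample statistic works the same way.
from itertools import combinations
import numpy as np

def base_estimator(batch):
    # Placeholder m-sample statistic; stands in for an IWVI gradient term.
    return np.log(np.mean(np.exp(batch)))

def disjoint_average(samples, m):
    # Current practice: split the n samples into n // m disjoint batches
    # and average the base estimator over them.
    n = len(samples)
    batches = [samples[i:i + m] for i in range(0, n - n % m, m)]
    return np.mean([base_estimator(b) for b in batches])

def u_statistic_average(samples, m):
    # U-statistic: average over all C(n, m) overlapping size-m subsets.
    # By classical U-statistic theory this never has higher variance
    # than the disjoint-batch average of the same kernel.
    subsets = combinations(range(len(samples)), m)
    return np.mean([base_estimator(samples[list(idx)]) for idx in subsets])

rng = np.random.default_rng(0)
x = rng.normal(size=8)
print(disjoint_average(x, m=2), u_statistic_average(x, m=2))
```

The complete U-statistic requires $\binom{n}{m}$ kernel evaluations, which is why the paper develops approximations with guarantees; a simple incomplete variant would average over a random subsample of the size-$m$ batches instead.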
Cite
Text
Burroni et al. "U-Statistics for Importance-Weighted Variational Inference." Transactions on Machine Learning Research, 2023.

Markdown

[Burroni et al. "U-Statistics for Importance-Weighted Variational Inference." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/burroni2023tmlr-ustatistics/)

BibTeX
@article{burroni2023tmlr-ustatistics,
title = {{U-Statistics for Importance-Weighted Variational Inference}},
author = {Burroni, Javier and Takatsu, Kenta and Domke, Justin and Sheldon, Daniel},
journal = {Transactions on Machine Learning Research},
year = {2023},
url = {https://mlanthology.org/tmlr/2023/burroni2023tmlr-ustatistics/}
}