Scalable Discrete Sampling as a Multi-Armed Bandit Problem
Abstract
Drawing a sample from a discrete distribution is one of the basic building blocks of Monte Carlo methods. Like other sampling algorithms, discrete sampling suffers from a high computational burden in large-scale inference problems. We study the problem of sampling a discrete random variable with a high degree of dependency, as is typical in large-scale Bayesian inference and graphical models, and propose an efficient approximate solution based on subsampling. We make a novel connection between discrete sampling and Multi-Armed Bandit problems with a finite reward population, and provide three algorithms with theoretical guarantees. Empirical evaluations demonstrate the robustness and efficiency of the approximate algorithms on both synthetic and real-world large-scale problems.
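The setting the abstract describes can be illustrated with a small sketch: each outcome's unnormalized log-weight is a sum over many data-dependent terms, so exact sampling touches every term, while a cheap subsampled estimate of the log-weights is the kind of noisy "reward" a bandit-style sampler would adaptively refine. The code below is a minimal illustration of this setting, not the paper's algorithms; all names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: K discrete outcomes, each with an
# unnormalized log-weight that is a sum of N per-datum terms,
# as in large-scale Bayesian inference.
K, N = 5, 100_000
terms = rng.normal(size=(K, N)) * 0.01  # per-datum log-likelihood terms

def exact_sample(terms, rng):
    """Exact discrete sampling: costs O(K * N), touching every term."""
    logw = terms.sum(axis=1)            # full-data log-weights
    p = np.exp(logw - logw.max())       # stabilize before exponentiating
    p /= p.sum()
    return rng.choice(len(p), p=p)

def subsampled_logw(terms, m, rng):
    """Unbiased estimate of the log-weights from m << N random terms;
    a bandit-style sampler would refine such estimates adaptively."""
    idx = rng.choice(terms.shape[1], size=m, replace=False)
    return terms[:, idx].sum(axis=1) * (terms.shape[1] / m)

sample = exact_sample(terms, rng)
print(sample)
```

The subsampled estimator is unbiased for the log-weights but noisy, which is exactly why the paper frames choosing among outcomes as a bandit problem over a finite reward population.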
Cite
Text
Chen and Ghahramani. "Scalable Discrete Sampling as a Multi-Armed Bandit Problem." International Conference on Machine Learning, 2016.

Markdown
[Chen and Ghahramani. "Scalable Discrete Sampling as a Multi-Armed Bandit Problem." International Conference on Machine Learning, 2016.](https://mlanthology.org/icml/2016/chen2016icml-scalable/)

BibTeX
@inproceedings{chen2016icml-scalable,
title = {{Scalable Discrete Sampling as a Multi-Armed Bandit Problem}},
author = {Chen, Yutian and Ghahramani, Zoubin},
booktitle = {International Conference on Machine Learning},
year = {2016},
pages = {2492--2501},
volume = {48},
url = {https://mlanthology.org/icml/2016/chen2016icml-scalable/}
}