A Langevin-like Sampler for Discrete Distributions
Abstract
We propose discrete Langevin proposal (DLP), a simple and scalable gradient-based proposal for sampling complex high-dimensional discrete distributions. In contrast to Gibbs sampling-based methods, DLP is able to update all coordinates in parallel in a single step, with the magnitude of the changes controlled by a stepsize. This allows cheap and efficient exploration of the space of high-dimensional and strongly correlated variables. We prove the efficiency of DLP by showing that the asymptotic bias of its stationary distribution is zero for log-quadratic distributions, and is small for distributions that are close to being log-quadratic. With DLP, we develop several variants of sampling algorithms, including unadjusted, Metropolis-adjusted, stochastic and preconditioned versions. DLP outperforms many popular alternatives on a wide variety of tasks, including Ising models, restricted Boltzmann machines, deep energy-based models, binary neural networks and language generation.
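To make the abstract's description concrete, below is a minimal sketch of one unadjusted DLP-style step for binary variables in {0, 1}^d. It is not a reproduction of the paper's equations: the specific per-coordinate form (each coordinate flips with probability proportional to exp(½ ∇f(x)_i (x'_i − x_i) − (x'_i − x_i)²/(2α))) is an assumption motivated by the Langevin analogy, and the names `dlp_step`, `log_prob`, and `alpha` are illustrative.

```python
# Sketch of one unadjusted DLP-style step on binary states in {0, 1}^d.
# Assumption (not verbatim from the paper): the proposal factorizes across
# coordinates, and coordinate i flips with probability proportional to
# exp(0.5 * grad_i * (x_i' - x_i) - (x_i' - x_i)^2 / (2 * alpha)).
import torch

def dlp_step(x, log_prob, alpha=0.2):
    """Update all coordinates of a binary state x in parallel, guided by the gradient."""
    x = x.detach().requires_grad_(True)
    grad = torch.autograd.grad(log_prob(x).sum(), x)[0]   # gradient of the log-probability

    flip = 1.0 - 2.0 * x                                   # x_i' - x_i for the flipped value
    # Unnormalized log-probability of flipping vs. staying (staying contributes 0),
    # so a sigmoid gives the softmax over the two choices per coordinate.
    logits_flip = 0.5 * grad * flip - flip.pow(2) / (2 * alpha)
    do_flip = torch.bernoulli(torch.sigmoid(logits_flip))  # all coordinates sampled in parallel
    return (x + do_flip * flip).detach()

# Hypothetical usage: a toy quadratic (Ising-like) log-probability on {0, 1}^d.
d = 16
W = torch.randn(d, d)
W = 0.1 * (W + W.t())
log_prob = lambda x: torch.einsum('i,ij,j->', x, W, x)
x = torch.bernoulli(0.5 * torch.ones(d))
for _ in range(100):
    x = dlp_step(x, log_prob, alpha=0.2)
```

The stepsize `alpha` plays the role described in the abstract: larger values make flips more likely and exploration more aggressive, smaller values keep the chain closer to its current state. A Metropolis-adjusted variant would additionally accept or reject the proposed state using the forward and reverse proposal probabilities.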
Cite
Text
Zhang et al. "A Langevin-like Sampler for Discrete Distributions." International Conference on Machine Learning, 2022.
Markdown
[Zhang et al. "A Langevin-like Sampler for Discrete Distributions." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/zhang2022icml-langevinlike/)
BibTeX
@inproceedings{zhang2022icml-langevinlike,
title = {{A Langevin-like Sampler for Discrete Distributions}},
author = {Zhang, Ruqi and Liu, Xingchao and Liu, Qiang},
booktitle = {International Conference on Machine Learning},
year = {2022},
pages = {26375--26396},
volume = {162},
url = {https://mlanthology.org/icml/2022/zhang2022icml-langevinlike/}
}