Chain of Log-Concave Markov Chains

Abstract

We introduce a theoretical framework for sampling from unnormalized densities based on a smoothing scheme that uses an isotropic Gaussian kernel with a single fixed noise scale. We prove that one can decompose sampling from a density (under minimal assumptions on the density) into a sequence of sampling steps from log-concave conditional densities, via accumulation of noisy measurements with equal noise levels. Our construction is unique in that it keeps track of a history of samples, making it non-Markovian as a whole, but it is lightweight algorithmically as the history only appears in the form of a running empirical mean of samples. Our sampling algorithm generalizes walk-jump sampling (Saremi & Hyvärinen, 2019): the "walk" phase becomes a (non-Markovian) chain of (log-concave) Markov chains, and the "jump" from the accumulated measurements is obtained by empirical Bayes. We study our sampling algorithm quantitatively using the 2-Wasserstein metric and compare it with various Langevin MCMC algorithms. We also report a remarkable capacity of our algorithm to "tunnel" between modes of a distribution.
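The "jump" step described in the abstract — an empirical-Bayes estimate of the clean sample from the running mean of accumulated noisy measurements — can be sketched on a toy problem. The following is a hypothetical illustration using Tweedie's formula under a two-component Gaussian-mixture prior (all parameter values and the mixture prior are assumptions for illustration, not the paper's setup); the key point is that averaging m measurements of noise scale sigma leaves an effective noise of sigma/sqrt(m), which the denoiser then removes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-mode Gaussian mixture prior (toy setup, not from the paper)
mus = np.array([-2.0, 2.0])  # component means
tau = 0.3                    # component standard deviation
w = np.array([0.5, 0.5])     # mixture weights

sigma = 1.0  # per-measurement noise scale (single fixed scale, as in the abstract)
m = 50       # number of accumulated noisy measurements

def posterior_mean(ybar, s):
    """Empirical-Bayes estimate E[x | ybar] under the mixture prior,
    where ybar ~ N(x, s^2); equivalent to Tweedie's formula here."""
    var = tau**2 + s**2
    # responsibilities of each component given ybar
    logp = -0.5 * (ybar - mus) ** 2 / var + np.log(w)
    r = np.exp(logp - logp.max())
    r /= r.sum()
    # per-component posterior means, mixed by responsibility
    post = (tau**2 * ybar + s**2 * mus) / var
    return float(r @ post)

# Draw a clean sample, accumulate noisy measurements, then "jump":
x = mus[rng.integers(2)] + tau * rng.standard_normal()
ys = x + sigma * rng.standard_normal(m)        # m equal-noise measurements
ybar = ys.mean()                               # running empirical mean
xhat = posterior_mean(ybar, sigma / np.sqrt(m))  # effective noise shrinks as 1/sqrt(m)
```

In the paper the measurements themselves are produced by running log-concave Markov chains on the conditional densities; here they are drawn directly, since only the averaging-then-denoising jump is being illustrated.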

Cite

Text

Saremi et al. "Chain of Log-Concave Markov Chains." International Conference on Learning Representations, 2024.

Markdown

[Saremi et al. "Chain of Log-Concave Markov Chains." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/saremi2024iclr-chain/)

BibTeX

@inproceedings{saremi2024iclr-chain,
  title     = {{Chain of Log-Concave Markov Chains}},
  author    = {Saremi, Saeed and Park, Ji Won and Bach, Francis},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/saremi2024iclr-chain/}
}