Sampling from Log-Concave Distributions with Infinity-Distance Guarantees
Abstract
For a $d$-dimensional log-concave distribution $\pi(\theta) \propto e^{-f(\theta)}$ constrained to a convex body $K$, the problem of outputting samples from a distribution $\nu$ that is $\varepsilon$-close to $\pi$ in the infinity distance $\sup_{\theta \in K} |\log \frac{\nu(\theta)}{\pi(\theta)}|$ arises in differentially private optimization. While sampling within total-variation distance $\varepsilon$ of $\pi$ can be done by algorithms whose runtime depends polylogarithmically on $\frac{1}{\varepsilon}$, prior algorithms for sampling within $\varepsilon$ infinity distance have runtime bounds that depend polynomially on $\frac{1}{\varepsilon}$. We bridge this gap by presenting an algorithm that outputs a sample from a distribution $\varepsilon$-close to $\pi$ in infinity distance and requires at most $\mathrm{poly}(\log \frac{1}{\varepsilon}, d)$ calls to a membership oracle for $K$ and an evaluation oracle for $f$, when $f$ is Lipschitz. Our approach departs from prior works, which construct Markov chains on a $\frac{1}{\varepsilon^2}$-discretization of $K$ to achieve a sample with $\varepsilon$ infinity-distance error; instead, we present a method that directly converts continuous samples from $K$ with total-variation-distance guarantees into samples with infinity-distance guarantees. This approach also yields an improved dependence on the dimension $d$ in the running time for sampling from a log-concave distribution on a polytope $K$ within infinity distance $\varepsilon$, by plugging in total-variation-distance runtime bounds for the Dikin walk Markov chain.
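To make the relationship between the two error notions explicit (this short derivation is our own illustration, not taken from the paper): an $\varepsilon$ infinity-distance bound gives $|\frac{\nu(\theta)}{\pi(\theta)} - 1| \le e^{\varepsilon} - 1$ for every $\theta \in K$, and hence, for $\varepsilon \le 1$,

$$ d_{\mathrm{TV}}(\nu, \pi) = \frac{1}{2} \int_K \left| \frac{\nu(\theta)}{\pi(\theta)} - 1 \right| \pi(\theta)\, d\theta \;\le\; \frac{1}{2}\left(e^{\varepsilon} - 1\right) \;\le\; \varepsilon. $$

The converse fails: a distribution can be within total-variation distance $\varepsilon$ of $\pi$ while assigning zero mass to a small subset of $K$, which makes the log-ratio, and hence the infinity distance, infinite. This asymmetry is why a separate conversion step from total-variation to infinity-distance guarantees is needed.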
Cite
Mangoubi and Vishnoi. "Sampling from Log-Concave Distributions with Infinity-Distance Guarantees." Neural Information Processing Systems, 2022. https://mlanthology.org/neurips/2022/mangoubi2022neurips-sampling/
BibTeX
@inproceedings{mangoubi2022neurips-sampling,
title = {{Sampling from Log-Concave Distributions with Infinity-Distance Guarantees}},
author = {Mangoubi, Oren and Vishnoi, Nisheeth},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/mangoubi2022neurips-sampling/}
}