Double Randomized Underdamped Langevin with Dimension-Independent Convergence Guarantee
Abstract
This paper focuses on high-dimensional sampling of log-concave distributions with composite structure: $p^*(\mathrm{d}x)\propto \exp(-g(x)-f(x))\mathrm{d}x$. We develop a double randomization technique, which leads to a fast underdamped Langevin algorithm with a dimension-independent convergence guarantee. We prove that the algorithm has an overall $\tilde{\mathcal{O}}\left(\frac{\left(\mathrm{tr}(H)\right)^{1/3}}{\epsilon^{2/3}}\right)$ iteration complexity to reach a sample whose distribution $p$ satisfies $W_2(p,p^*)\leq \epsilon$. Here, $H$ is an upper bound on the Hessian matrices of $f$ and does not explicitly depend on the dimension $d$. For posterior sampling over linear models with normalized data, the resulting convergence rate is dimension-free and improves on the previous best-known results by a factor of $d^{1/3}$. The analysis behind the faster convergence rate brings new insights into high-dimensional sampling.
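To make the setting concrete, the sketch below is a plain Euler–Maruyama discretization of underdamped (kinetic) Langevin dynamics for a target $p^*(x)\propto\exp(-U(x))$ with $U=g+f$. This is only a hedged illustration of the dynamics the algorithm builds on, not the paper's double randomized scheme; the function names, step size, and friction parameter are assumptions chosen for the example.

```python
import numpy as np

def underdamped_langevin(grad_U, x0, n_steps=10_000, step=1e-2, gamma=2.0, rng=None):
    """Euler-Maruyama discretization of underdamped (kinetic) Langevin dynamics.

    Draws approximate samples from p*(x) ∝ exp(-U(x)), where U = g + f in the
    composite setting. This is a baseline discretization for illustration only,
    not the paper's double randomized underdamped Langevin algorithm.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)  # velocity (momentum) variable of the kinetic dynamics
    samples = []
    for _ in range(n_steps):
        # dv = -(gamma * v + grad U(x)) dt + sqrt(2 * gamma) dB_t
        noise = rng.standard_normal(x.shape)
        v = v - step * (gamma * v + grad_U(x)) + np.sqrt(2.0 * gamma * step) * noise
        # dx = v dt
        x = x + step * v
        samples.append(x.copy())
    return np.array(samples)

# Example: standard Gaussian target, U(x) = ||x||^2 / 2, so grad_U(x) = x.
if __name__ == "__main__":
    draws = underdamped_langevin(grad_U=lambda x: x, x0=np.zeros(5))
    print(draws[-1])
```

In this simple discretization the step size must scale with the dimension to control the error; the paper's double randomization is precisely what removes that explicit dimension dependence, replacing $d$ with $\mathrm{tr}(H)$ in the complexity bound.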
Cite
Text
Liu et al. "Double Randomized Underdamped Langevin with Dimension-Independent Convergence Guarantee." Neural Information Processing Systems, 2023.
Markdown
[Liu et al. "Double Randomized Underdamped Langevin with Dimension-Independent Convergence Guarantee." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/liu2023neurips-double/)
BibTeX
@inproceedings{liu2023neurips-double,
title = {{Double Randomized Underdamped Langevin with Dimension-Independent Convergence Guarantee}},
author = {Liu, Yuanshi and Fang, Cong and Zhang, Tong},
booktitle = {Neural Information Processing Systems},
year = {2023},
url = {https://mlanthology.org/neurips/2023/liu2023neurips-double/}
}