Penalized Langevin Dynamics with Vanishing Penalty for Smooth and Log-Concave Targets

Abstract

We study the problem of sampling from a probability distribution on $\mathbb R^p$ defined via a convex and smooth potential function. We first consider a continuous-time diffusion-type process, termed Penalized Langevin dynamics (PLD), whose drift is the negative gradient of the potential plus a linear penalty that vanishes as time goes to infinity. We establish an upper bound on the Wasserstein-2 distance between the distribution of the PLD at time $t$ and the target, which makes explicit how the rate of decay of the penalty governs the accuracy of the approximation. As a consequence, in the low-temperature limit we infer a new result on the convergence of the penalized gradient flow for the optimization problem.
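The abstract does not display the dynamics explicitly; a plausible form consistent with the description is $\mathrm{d}X_t = -\big(\nabla f(X_t) + \alpha(t)\,X_t\big)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}W_t$, where $f$ is the potential and $\alpha(t)\to 0$. The sketch below is a minimal Euler-Maruyama discretization of such a process; the penalty schedule $\alpha(t) = c/(t+1)$, the quadratic test potential, and the step size are illustrative assumptions, not the paper's choices.

```python
import numpy as np

def grad_f(x):
    # Gradient of an illustrative potential f(x) = ||x||^2 / 2 (standard Gaussian target).
    return x

def penalized_langevin(grad_f, x0, n_steps=10_000, h=1e-3, c=1.0, rng=None):
    """Euler-Maruyama discretization of a penalized Langevin dynamics of the form
        dX_t = -(grad f(X_t) + alpha(t) X_t) dt + sqrt(2) dW_t,
    with a vanishing penalty alpha(t) = c / (t + 1) (one possible schedule, assumed here)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    t = 0.0
    for _ in range(n_steps):
        alpha = c / (t + 1.0)                      # vanishing linear-penalty coefficient
        drift = -(grad_f(x) + alpha * x)           # penalized drift
        noise = np.sqrt(2.0 * h) * rng.standard_normal(x.shape)
        x = x + h * drift + noise
        t += h
    return x

# Draw one approximate sample from the target exp(-f) on R^p with p = 5.
sample = penalized_langevin(grad_f, x0=np.zeros(5))
```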

Cite

Text

Karagulyan and Dalalyan. "Penalized Langevin Dynamics with Vanishing Penalty for Smooth and Log-Concave Targets." Neural Information Processing Systems, 2020.

Markdown

[Karagulyan and Dalalyan. "Penalized Langevin Dynamics with Vanishing Penalty for Smooth and Log-Concave Targets." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/karagulyan2020neurips-penalized/)

BibTeX

@inproceedings{karagulyan2020neurips-penalized,
  title     = {{Penalized Langevin Dynamics with Vanishing Penalty for Smooth and Log-Concave Targets}},
  author    = {Karagulyan, Avetik and Dalalyan, Arnak},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/karagulyan2020neurips-penalized/}
}