The Performance of the Unadjusted Langevin Algorithm Without Smoothness Assumptions
Abstract
In this article, we study the problem of sampling from distributions whose densities are neither necessarily smooth nor log-concave. We propose a simple Langevin-based algorithm that does not rely on popular but computationally demanding techniques, such as the Moreau-Yosida envelope or Gaussian smoothing, thereby showing that the performance of samplers such as ULA need not degenerate arbitrarily under low regularity. In particular, we show that the usual Lipschitz or Hölder continuity assumption can be replaced by a geometric one-sided Lipschitz condition that allows even for discontinuous gradients of the log-density. We derive non-asymptotic guarantees for the convergence of the algorithm to the target distribution in Wasserstein distances. Non-asymptotic bounds are also provided for the performance of the algorithm as an optimizer, specifically for the solution of associated excess-risk optimization problems.
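For context, the unadjusted Langevin algorithm (ULA) that the abstract refers to is the discretized Langevin recursion θ_{k+1} = θ_k − λ ∇U(θ_k) + √(2λ) ξ_{k+1}, where U = −log π is the potential of the target density π, λ is the step size, and the ξ_k are i.i.d. standard Gaussians. The sketch below is a minimal illustrative implementation of that standard recursion, not the authors' code: the function names (`ula`, `grad_U`) and the toy Laplace target are assumptions for illustration, and the paper's contribution concerns the weakened regularity assumptions on ∇U rather than the recursion itself.

```python
import numpy as np

def ula(grad_U, theta0, step, n_steps, rng=None):
    """Standard ULA recursion (illustrative sketch):
    theta_{k+1} = theta_k - step * grad_U(theta_k) + sqrt(2*step) * xi_{k+1},
    with xi ~ N(0, I)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    samples = np.empty((n_steps, theta.size))
    for k in range(n_steps):
        noise = rng.standard_normal(theta.size)
        theta = theta - step * grad_U(theta) + np.sqrt(2.0 * step) * noise
        samples[k] = theta
    return samples

# Hypothetical example: a non-smooth target pi(x) proportional to exp(-|x|),
# whose log-gradient sign(x) is discontinuous at 0 -- the kind of low-regularity
# potential the paper's one-sided Lipschitz condition is meant to accommodate.
draws = ula(grad_U=lambda x: np.sign(x), theta0=0.0, step=1e-2, n_steps=10_000)
```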
Cite
Text
Johnston et al. "The Performance of the Unadjusted Langevin Algorithm Without Smoothness Assumptions." Transactions on Machine Learning Research, 2025.
Markdown
[Johnston et al. "The Performance of the Unadjusted Langevin Algorithm Without Smoothness Assumptions." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/johnston2025tmlr-performance/)
BibTeX
@article{johnston2025tmlr-performance,
title = {{The Performance of the Unadjusted Langevin Algorithm Without Smoothness Assumptions}},
author = {Johnston, Tim and Lytras, Iosif and Makras, Nikolaos and Sabanis, Sotirios},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/johnston2025tmlr-performance/}
}