Dimension-Free Convergence Rates for Gradient Langevin Dynamics in RKHS
Abstract
Gradient Langevin dynamics (GLD) and stochastic GLD (SGLD) have attracted considerable attention recently, as a way to provide convergence guarantees in a non-convex setting. However, the known rates grow exponentially with the dimension of the space under the dissipative condition. In this work, we provide a convergence analysis of GLD and SGLD when the optimization space is an infinite-dimensional Hilbert space. More precisely, we derive non-asymptotic, dimension-free convergence rates for GLD/SGLD when performing regularized non-convex optimization in a reproducing kernel Hilbert space. Among other tools, the convergence analysis relies on the properties of a stochastic differential equation, its discrete-time Galerkin approximation, and the geometric ergodicity of the associated Markov chains.
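For concreteness, below is a minimal sketch of the standard finite-dimensional GLD iteration that the paper's infinite-dimensional analysis generalizes: the Euler-Maruyama discretization of the Langevin SDE dX_t = -∇f(X_t) dt + √(2/β) dB_t. The function names, step size, inverse temperature β, and the test objective are illustrative assumptions, not the paper's notation or method.

```python
import numpy as np

def gld(grad_f, x0, step=1e-2, beta=10.0, n_iters=5_000, rng=None):
    """Gradient Langevin dynamics: Euler-Maruyama discretization of
    dX_t = -grad f(X_t) dt + sqrt(2 / beta) dB_t."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        noise = rng.standard_normal(x.shape)
        # Gradient step plus Gaussian noise scaled by sqrt(2 * step / beta).
        x = x - step * grad_f(x) + np.sqrt(2.0 * step / beta) * noise
    return x

# Illustrative dissipative, non-convex objective f(x) = ||x||^4/4 - ||x||^2/2,
# with gradient (||x||^2 - 1) x.
grad_f = lambda x: (x @ x - 1.0) * x
x_final = gld(grad_f, x0=np.full(2, 2.0))
```

SGLD replaces grad_f with an unbiased stochastic estimate (e.g., a mini-batch gradient); the paper's contribution is that, in the RKHS setting it studies, rates for such iterations do not degrade with the dimension of the space.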
Cite
Text
Muzellec et al. "Dimension-Free Convergence Rates for Gradient Langevin Dynamics in RKHS." Conference on Learning Theory, 2022.
Markdown
[Muzellec et al. "Dimension-Free Convergence Rates for Gradient Langevin Dynamics in RKHS." Conference on Learning Theory, 2022.](https://mlanthology.org/colt/2022/muzellec2022colt-dimensionfree/)
BibTeX
@inproceedings{muzellec2022colt-dimensionfree,
title = {{Dimension-Free Convergence Rates for Gradient Langevin Dynamics in RKHS}},
author = {Muzellec, Boris and Sato, Kanji and Massias, Mathurin and Suzuki, Taiji},
booktitle = {Conference on Learning Theory},
year = {2022},
pages = {1356--1420},
volume = {178},
url = {https://mlanthology.org/colt/2022/muzellec2022colt-dimensionfree/}
}