The Mirror Langevin Algorithm Converges with Vanishing Bias

Abstract

The technique of changing the geometry of a problem from the Euclidean metric to a Hessian metric has proved quite effective in optimization, and it has also been studied for sampling. The Mirror Langevin Diffusion (MLD) is a sampling analogue of mirror flow in continuous time, and it has nice convergence properties under log-Sobolev or Poincaré inequalities relative to the Hessian metric. In discrete time, a simple discretization of MLD is the Mirror Langevin Algorithm (MLA), which was previously shown to have a biased convergence guarantee with a non-vanishing bias term (one that does not go to zero as the step size goes to zero). This raised the question of whether a better analysis or a better discretization is needed to achieve a vanishing bias. Here we study the Mirror Langevin Algorithm and show that it indeed has a vanishing bias. We apply mean-square analysis to establish a mixing time bound for MLA under the modified self-concordance condition.
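
To make the algorithm the abstract refers to concrete, here is a minimal NumPy sketch of MLA: an Euler-Maruyama-style step in the dual (mirror) space followed by mapping back through the inverse mirror map. The target distribution (a standard Gaussian, so the potential is f(x) = ||x||²/2) and the mirror map φ(x) = Σᵢ cosh(xᵢ) are illustrative assumptions chosen for a self-contained demo, not examples from the paper; cosh gives a diagonal Hessian and a closed-form inverse gradient (arcsinh), which keeps the sketch short.

```python
import numpy as np

# Sketch of one pass of the Mirror Langevin Algorithm (MLA), the simple
# discretization of the Mirror Langevin Diffusion discussed in the abstract:
#   y_{k+1} = grad_phi(x_k) - h * grad_f(x_k)
#             + sqrt(2h) * sqrtm(hess_phi(x_k)) @ xi_k,   xi_k ~ N(0, I)
#   x_{k+1} = (grad_phi)^{-1}(y_{k+1})
# Target and mirror map below are illustrative assumptions, not from the paper.

def grad_f(x):          # gradient of the potential f(x) = ||x||^2 / 2
    return x

def grad_phi(x):        # mirror map gradient: nabla phi(x) = sinh(x)
    return np.sinh(x)

def grad_phi_inv(y):    # inverse mirror map: (nabla phi)^{-1}(y) = arcsinh(y)
    return np.arcsinh(y)

def sqrt_hess_phi(x):   # square root of the diagonal Hessian, cosh(x) >= 1
    return np.sqrt(np.cosh(x))

def mla(x0, h, n_steps, rng):
    """Run MLA for n_steps with step size h; return all iterates."""
    x = np.array(x0, dtype=float)
    xs = [x.copy()]
    for _ in range(n_steps):
        xi = rng.standard_normal(x.shape)
        # Langevin step in the dual space (diagonal Hessian, so the
        # matrix square root acts elementwise here).
        y = grad_phi(x) - h * grad_f(x) + np.sqrt(2 * h) * sqrt_hess_phi(x) * xi
        x = grad_phi_inv(y)  # map back to the primal space
        xs.append(x.copy())
    return np.array(xs)

rng = np.random.default_rng(0)
samples = mla(x0=np.zeros(2), h=0.05, n_steps=20_000, rng=rng)
# With a vanishing bias, the per-coordinate empirical variance (after
# burn-in) should approach the Gaussian value 1.0 as h -> 0.
print(samples[5_000:].var(axis=0))
```

Shrinking `h` (and increasing `n_steps`) should move the printed variances toward 1.0, which is the behavior a vanishing-bias guarantee predicts; a non-vanishing bias would leave a gap that persists as the step size decreases.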

Cite

Text

Li et al. "The Mirror Langevin Algorithm Converges with Vanishing Bias." Proceedings of The 33rd International Conference on Algorithmic Learning Theory, 2022.

Markdown

[Li et al. "The Mirror Langevin Algorithm Converges with Vanishing Bias." Proceedings of The 33rd International Conference on Algorithmic Learning Theory, 2022.](https://mlanthology.org/alt/2022/li2022alt-mirror/)

BibTeX

@inproceedings{li2022alt-mirror,
  title     = {{The Mirror Langevin Algorithm Converges with Vanishing Bias}},
  author    = {Li, Ruilin and Tao, Molei and Vempala, Santosh S. and Wibisono, Andre},
  booktitle = {Proceedings of The 33rd International Conference on Algorithmic Learning Theory},
  year      = {2022},
  pages     = {718--742},
  volume    = {167},
  url       = {https://mlanthology.org/alt/2022/li2022alt-mirror/}
}