Optimization Guarantees for Square-Root Natural-Gradient Variational Inference
Abstract
Variational inference with natural-gradient descent often shows fast convergence in practice, but its theoretical convergence guarantees have been challenging to establish. This is true even for the simplest cases that involve concave log-likelihoods and use a Gaussian approximation. We show that the challenge can be circumvented for such cases using a square-root parameterization for the Gaussian covariance. This approach establishes novel convergence guarantees for natural-gradient variational-Gaussian inference and its continuous-time gradient flow. Our experiments demonstrate the effectiveness of natural gradient methods and highlight their advantages over algorithms that use Euclidean or Wasserstein geometries.
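To make the square-root idea concrete, below is a minimal NumPy sketch of a Gaussian variational approximation q(z) = N(m, SSᵀ) parameterized directly by a square-root factor S, fit by reparameterization-gradient ascent on the ELBO against a hypothetical concave (Gaussian) log-density. This only illustrates the parameterization itself: the update shown uses plain Euclidean gradients, not the paper's natural-gradient update, and the target density, step size, and sample count are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2

# Hypothetical concave target: log p(z) of a Gaussian with mean mu_star
# and precision matrix prec_star (stand-in for a concave log-likelihood).
mu_star = np.array([1.0, -2.0])
prec_star = np.array([[2.0, 0.3],
                      [0.3, 1.0]])

def grad_log_p(z):
    # Gradient of the concave log-density: -Prec (z - mu_star).
    return -prec_star @ (z - mu_star)

# Square-root parameterization of the variational Gaussian: q(z) = N(m, S S^T).
m = np.zeros(d)
S = np.eye(d)

lr, n_samples, n_iters = 0.05, 32, 500
for _ in range(n_iters):
    eps = rng.standard_normal((n_samples, d))
    z = m + eps @ S.T                              # reparameterized samples z = m + S eps
    g = np.array([grad_log_p(zi) for zi in z])
    grad_m = g.mean(axis=0)                        # d/dm  E_q[log p(z)]
    # d/dS E_q[log p(z)] = E[grad_log_p(z) eps^T]; entropy term adds S^{-T}.
    grad_S = (g[:, :, None] * eps[:, None, :]).mean(axis=0) + np.linalg.inv(S).T
    m += lr * grad_m                               # Euclidean (not natural-gradient) ascent step
    S += lr * grad_S

print("variational mean      ", m)        # should approach mu_star
print("variational covariance", S @ S.T)  # should approach inv(prec_star)
```

Working with S instead of the covariance keeps the implied covariance SSᵀ positive semi-definite by construction, which is the property the square-root parameterization exploits; the paper's contribution is the convergence analysis for the natural-gradient version of such updates, which this Euclidean sketch does not reproduce.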
Cite
Text
Kumar et al. "Optimization Guarantees for Square-Root Natural-Gradient Variational Inference." Transactions on Machine Learning Research, 2025.

Markdown
[Kumar et al. "Optimization Guarantees for Square-Root Natural-Gradient Variational Inference." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/kumar2025tmlr-optimization/)

BibTeX
@article{kumar2025tmlr-optimization,
title = {{Optimization Guarantees for Square-Root Natural-Gradient Variational Inference}},
author = {Kumar, Navish and Möllenhoff, Thomas and Khan, Mohammad Emtiyaz and Lucchi, Aurelien},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/kumar2025tmlr-optimization/}
}