Variational Stochastic Gradient Descent for Deep Neural Networks

Abstract

Optimizing deep neural networks is one of the main tasks in successful deep learning. Current state-of-the-art optimizers are adaptive gradient-based optimization methods such as Adam. Recently, there has been an increasing interest in formulating gradient-based optimizers in a probabilistic framework to better model the uncertainty of the gradients. Here, we propose to combine both approaches, resulting in the Variational Stochastic Gradient Descent (VSGD) optimizer. We model gradient updates as a probabilistic model and utilize stochastic variational inference (SVI) to derive an efficient and effective update rule. Further, we show how our VSGD method relates to other adaptive gradient-based optimizers like Adam. Lastly, we carry out experiments on two image classification datasets and four deep neural network architectures, where we show that VSGD outperforms Adam and SGD.
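
The abstract describes the idea only at a high level. The sketch below is a minimal, hypothetical illustration of the general principle it mentions: treating each observed stochastic gradient as a noisy measurement of a latent "true" gradient, maintaining a Gaussian belief over that latent gradient, and descending along its posterior mean. It is not the authors' VSGD update rule; the function names and hyperparameters (noisy_grad, probabilistic_sgd, noise_precision, drift_var, lr) are assumptions made for this sketch.

```python
# Illustrative sketch only -- NOT the authors' VSGD algorithm.
# Idea: filter noisy gradient observations through a Gaussian belief over the
# latent true gradient, then step along the posterior mean instead of the raw
# stochastic gradient.
import numpy as np

rng = np.random.default_rng(0)


def noisy_grad(w, noise_std=1.0):
    # Toy objective f(w) = 0.5 * ||w||^2 has gradient w; Gaussian noise mimics
    # mini-batch stochasticity.
    return w + noise_std * rng.normal(size=w.shape)


def probabilistic_sgd(steps=500, lr=0.1, noise_precision=1.0,
                      prior_precision=1.0, drift_var=0.1):
    w = rng.normal(size=5)
    # Gaussian belief over the latent gradient at the current iterate.
    mean = np.zeros_like(w)
    precision = np.full_like(w, prior_precision)
    for _ in range(steps):
        g_hat = noisy_grad(w)
        # Measurement update: fuse the current belief with the new noisy
        # gradient observation (conjugate Gaussian update).
        post_precision = precision + noise_precision
        post_mean = (precision * mean + noise_precision * g_hat) / post_precision
        # Descend along the posterior mean rather than the raw noisy gradient.
        w = w - lr * post_mean
        # The iterate moved, so the latent gradient drifts: inflate the
        # variance before the next observation (Kalman-style predict step).
        mean = post_mean
        precision = 1.0 / (1.0 / post_precision + drift_var)
    return w


if __name__ == "__main__":
    w_star = probabilistic_sgd()
    print("final ||w|| =", np.linalg.norm(w_star))  # should be close to 0
```

On this toy quadratic, filtering the gradient noise lets the iterate settle near the optimum with smaller fluctuations than stepping on the raw noisy gradient would; the paper develops this probabilistic treatment rigorously with SVI and relates the resulting update rule to Adam.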

Cite

Text

Kuzina et al. "Variational Stochastic Gradient Descent for Deep Neural Networks." Transactions on Machine Learning Research, 2025.

Markdown

[Kuzina et al. "Variational Stochastic Gradient Descent for Deep Neural Networks." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/kuzina2025tmlr-variational/)

BibTeX

@article{kuzina2025tmlr-variational,
  title     = {{Variational Stochastic Gradient Descent for Deep Neural Networks}},
  author    = {Kuzina, Anna and Chen, Haotian and Esmaeili, Babak and Tomczak, Jakub M.},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/kuzina2025tmlr-variational/}
}