KOALA++: Efficient Kalman-Based Optimization with Gradient-Covariance Products

Abstract

We propose KOALA++, a scalable Kalman-based optimization algorithm that explicitly models structured gradient uncertainty in neural network training. Unlike second-order methods, which rely on expensive second-order gradient computations, our method directly estimates the parameter covariance matrix by recursively updating compact gradient-covariance products. This design improves on the original KOALA framework, which assumed a diagonal covariance, by implicitly capturing richer uncertainty structure without storing the full covariance matrix or inverting large matrices. Across diverse tasks, including image classification and language modeling, KOALA++ achieves accuracy on par with, or better than, state-of-the-art second-order optimizers while maintaining the efficiency of first-order methods.
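To make the idea of recursively updated gradient-covariance products concrete, below is a minimal NumPy sketch of a Kalman-style parameter update that keeps only the vector u ≈ P·g (covariance times gradient) rather than the full covariance P. The function name, the hyperparameters `r`, `q`, and `target`, and the exact form of the recursion are illustrative assumptions based on the abstract, not the authors' published equations.

```python
import numpy as np

def koala_pp_style_step(w, grad, loss, u, r=1.0, q=1e-3, target=0.0):
    """Hypothetical Kalman-style update maintaining a gradient-covariance product.

    w      : parameter vector (Kalman state estimate)
    grad   : gradient of the loss at w (plays the role of the observation Jacobian)
    loss   : current scalar loss value (the observation)
    u      : running estimate of the product P @ grad, never forming P itself
    r, q   : assumed observation / process noise scales
    target : assumed target loss value for the innovation
    """
    # Innovation: gap between the observed loss and its target.
    innovation = loss - target
    # Scalar innovation variance H P H^T + R, computed via u ≈ P @ grad.
    s = grad @ u + r
    # Kalman gain direction K = P H^T / s ≈ u / s (a vector, no matrix inverse).
    k = u / s
    # State update: move the parameters along the gain, scaled by the innovation.
    w_new = w - k * innovation
    # Recursive update of the covariance-gradient product (sketch):
    # (I - K H) P g + Q g  ≈  u - k (grad @ u) + q * grad
    u_new = u - k * (grad @ u) + q * grad
    return w_new, u_new
```

The key point the sketch illustrates is cost: every quantity is a vector or a scalar, so one step is O(d) in the number of parameters, matching first-order methods while still propagating a covariance-derived search direction.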

Cite

Text

Xia et al. "KOALA++: Efficient Kalman-Based Optimization with Gradient-Covariance Products." Advances in Neural Information Processing Systems, 2025.

Markdown

[Xia et al. "KOALA++: Efficient Kalman-Based Optimization with Gradient-Covariance Products." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/xia2025neurips-koala/)

BibTeX

@inproceedings{xia2025neurips-koala,
  title     = {{KOALA++: Efficient Kalman-Based Optimization with Gradient-Covariance Products}},
  author    = {Xia, Zixuan and Davtyan, Aram and Favaro, Paolo},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/xia2025neurips-koala/}
}