AxlePro: Momentum-Accelerated Batched Training of Kernel Machines

Abstract

In this paper we derive a novel iterative algorithm for learning kernel machines. Our algorithm, $\textsf{AxlePro}$, extends the $\textsf{EigenPro}$ family of algorithms via momentum-based acceleration. $\textsf{AxlePro}$ can be applied to train kernel machines with arbitrary positive semidefinite kernels. We provide a convergence guarantee for the algorithm and demonstrate the speed-up of $\textsf{AxlePro}$ over competing algorithms via numerical experiments. Furthermore, we derive a version of $\textsf{AxlePro}$ to train large kernel models over arbitrarily large datasets.
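To make the setting concrete, the sketch below shows generic momentum-accelerated, mini-batch training of a kernel machine $f(x) = \sum_i \alpha_i K(x, x_i)$ with a squared loss. This is only an illustrative baseline under assumed choices (a Gaussian kernel, heavy-ball momentum, and hyperparameters `lr`, `beta`); it is not the $\textsf{AxlePro}$ update rule, whose preconditioning and momentum schedule are given in the paper.

```python
import numpy as np

def gaussian_kernel(X, Z, bandwidth=1.0):
    """Assumed kernel for illustration: K(x, z) = exp(-||x - z||^2 / (2 * bandwidth^2))."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Z**2, axis=1)[None, :]
        - 2.0 * X @ Z.T
    )
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def train_kernel_machine(X, y, n_epochs=50, batch_size=64, lr=0.1, beta=0.9, bandwidth=1.0):
    """Hypothetical sketch (not AxlePro): fit coefficients alpha of
    f(x) = sum_i alpha_i K(x, x_i) by mini-batch functional gradient
    descent on the squared loss, with heavy-ball momentum."""
    n = X.shape[0]
    alpha = np.zeros(n)
    velocity = np.zeros(n)
    rng = np.random.default_rng(0)
    for _ in range(n_epochs):
        for idx in np.array_split(rng.permutation(n), max(1, n // batch_size)):
            K_batch = gaussian_kernel(X[idx], X, bandwidth)   # shape (b, n)
            residual = K_batch @ alpha - y[idx]               # f(x_b) - y_b on the batch
            grad = np.zeros(n)
            grad[idx] = residual / len(idx)                   # RKHS gradient touches only batch coefficients
            velocity = beta * velocity - lr * grad            # heavy-ball momentum step (assumed schedule)
            alpha = alpha + velocity
    return alpha

# Example usage: alpha = train_kernel_machine(X_train, y_train);
# predictions are gaussian_kernel(X_test, X_train) @ alpha.
```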

Cite

Text

Zhang and Pandit. "AxlePro: Momentum-Accelerated Batched Training of Kernel Machines." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.

Markdown

[Zhang and Pandit. "AxlePro: Momentum-Accelerated Batched Training of Kernel Machines." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.](https://mlanthology.org/aistats/2025/zhang2025aistats-axlepro/)

BibTeX

@inproceedings{zhang2025aistats-axlepro,
  title     = {{AxlePro: Momentum-Accelerated Batched Training of Kernel Machines}},
  author    = {Zhang, Yiming and Pandit, Parthe},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  year      = {2025},
  pages     = {1666--1674},
  volume    = {258},
  url       = {https://mlanthology.org/aistats/2025/zhang2025aistats-axlepro/}
}