Efficient Full-Matrix Adaptive Regularization

Abstract

Adaptive regularization methods pre-multiply a descent direction by a preconditioning matrix. Due to the large number of parameters of machine learning problems, full-matrix preconditioning methods are prohibitively expensive. We show how to modify full-matrix adaptive regularization in order to make it practical and effective. We also provide a novel theoretical analysis for adaptive regularization in non-convex optimization settings. The core of our algorithm, termed GGT, consists of the efficient computation of the inverse square root of a low-rank matrix. Our preliminary experiments show improved iteration-wise convergence rates across synthetic tasks and standard deep learning benchmarks, and that the more carefully preconditioned steps sometimes lead to a better solution.
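The key computational idea named in the abstract, taking the inverse square root of a low-rank matrix built from a window of recent gradients, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name, the damping value `eps`, and the windowed-gradient setup are assumptions for illustration. It only shows the standard low-rank trick: work with the small r x r Gram matrix instead of the full d x d preconditioner.

```python
import numpy as np

def ggt_precondition(G, g, eps=1e-4):
    """Apply (G G^T + eps*I)^{-1/2} to a gradient g without forming the d x d matrix.

    G   : d x r matrix whose columns are recent gradients (d = #params, r << d)
    g   : current gradient, shape (d,)
    eps : damping term (illustrative value, not taken from the paper)
    """
    # Eigendecompose the small r x r Gram matrix: G^T G = V diag(s) V^T.
    GtG = G.T @ G
    s, V = np.linalg.eigh(GtG)
    s = np.clip(s, 0.0, None)

    # Recover the left singular vectors of G on the nonzero spectrum: U = G V diag(s)^{-1/2}.
    nz = s > 1e-12
    U = G @ (V[:, nz] / np.sqrt(s[nz]))  # d x r', orthonormal columns

    # (G G^T + eps I)^{-1/2} = U diag(1/sqrt(s+eps) - 1/sqrt(eps)) U^T + (1/sqrt(eps)) I,
    # so the product with g needs only d x r' operations.
    coeff = 1.0 / np.sqrt(s[nz] + eps) - 1.0 / np.sqrt(eps)
    return U @ (coeff * (U.T @ g)) + g / np.sqrt(eps)
```

The point of the decomposition is cost: the expensive object is never materialized, so the per-step work is O(d r^2 + r^3) rather than the O(d^3) needed to invert a full d x d preconditioner.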

Cite

Text

Agarwal et al. "Efficient Full-Matrix Adaptive Regularization." International Conference on Machine Learning, 2019.

Markdown

[Agarwal et al. "Efficient Full-Matrix Adaptive Regularization." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/agarwal2019icml-efficient/)

BibTeX

@inproceedings{agarwal2019icml-efficient,
  title     = {{Efficient Full-Matrix Adaptive Regularization}},
  author    = {Agarwal, Naman and Bullins, Brian and Chen, Xinyi and Hazan, Elad and Singh, Karan and Zhang, Cyril and Zhang, Yi},
  booktitle = {International Conference on Machine Learning},
  year      = {2019},
  pages     = {102--110},
  volume    = {97},
  url       = {https://mlanthology.org/icml/2019/agarwal2019icml-efficient/}
}