Extreme Tensoring for Low-Memory Preconditioning

Abstract

State-of-the-art models are now trained with billions of parameters, reaching hardware limits in terms of memory consumption. This has created a recent demand for memory-efficient optimizers. To this end, we investigate the limits and performance tradeoffs of memory-efficient adaptively preconditioned gradient methods. We propose *extreme tensoring* for high-dimensional stochastic optimization, showing that an optimizer needs very little memory to benefit from adaptive preconditioning. Our technique applies to arbitrary models (not necessarily with tensor-shaped parameters), and is accompanied by regret and convergence guarantees, which shed light on the tradeoffs between preconditioner quality and expressivity. On a large-scale NLP model, we reduce the optimizer memory overhead by three orders of magnitude, without degrading performance.
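
To make the memory tradeoff concrete: the idea behind extreme tensoring is to reshape a flat parameter into a tensor with many small modes and keep one small second-moment accumulator per mode, so the diagonal preconditioner is implicitly a Kronecker product of small factors. Below is a minimal NumPy sketch of this idea; the function name `extreme_tensoring_step`, the sum-marginal accumulator rule, and the square-root exponent are simplifying assumptions for illustration, not the authors' reference implementation.

```python
import numpy as np

def extreme_tensoring_step(param, grad, accumulators, shape, lr=0.1, eps=1e-8):
    """One adaptively preconditioned step with per-mode accumulators.

    param, grad  : flat arrays with prod(shape) entries
    accumulators : list of arrays; accumulators[i] has length shape[i]
    The preconditioner state costs sum(shape) floats, not prod(shape).
    """
    g = grad.reshape(shape)
    g2 = g ** 2
    # Update each mode's accumulator with the marginal of the squared
    # gradient over all other modes (an assumed Adagrad-like rule).
    for i in range(len(shape)):
        other_axes = tuple(j for j in range(len(shape)) if j != i)
        accumulators[i] += g2.sum(axis=other_axes)
    # The implicit diagonal preconditioner is the Kronecker (outer)
    # product of the small per-mode accumulators.
    precond = accumulators[0]
    for acc in accumulators[1:]:
        precond = np.multiply.outer(precond, acc)
    step = g / (np.sqrt(precond) + eps)  # Adagrad-style root (assumption)
    return param - lr * step.reshape(-1)

# Usage: a 4096-entry parameter reshaped as a 6-mode tensor of shape (4,)*6;
# the accumulators hold 24 floats instead of 4096.
shape = (4,) * 6
rng = np.random.default_rng(0)
param = rng.standard_normal(4096)
grad = rng.standard_normal(4096)
accumulators = [np.zeros(d) for d in shape]
param = extreme_tensoring_step(param, grad, accumulators, shape)
```

In this toy setting the optimizer state shrinks from one entry per parameter (4096 floats) to the sum of the mode sizes (24 floats); scaling the same arithmetic to large parameter blocks gives reductions of the magnitude reported in the abstract.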

Cite

Text

Chen et al. "Extreme Tensoring for Low-Memory Preconditioning." International Conference on Learning Representations, 2020.

Markdown

[Chen et al. "Extreme Tensoring for Low-Memory Preconditioning." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/chen2020iclr-extreme/)

BibTeX

@inproceedings{chen2020iclr-extreme,
  title     = {{Extreme Tensoring for Low-Memory Preconditioning}},
  author    = {Chen, Xinyi and Agarwal, Naman and Hazan, Elad and Zhang, Cyril and Zhang, Yi},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/chen2020iclr-extreme/}
}