Tensor-GaLore: Memory-Efficient Training via Gradient Tensor Decomposition

Abstract

We present Tensor-GaLore, a novel method for efficient training of neural networks with higher-order tensor weights. Many models, particularly those used in scientific computing and computer vision, employ tensor-parameterized layers to capture complex, high-dimensional relationships. However, these tensor structures lead to significant memory requirements during training. Our method addresses this challenge through low-rank subspace optimization of gradient tensors via Tucker decomposition, overcoming the limitations of previous approaches, which are restricted to matrix-parameterized weights, including those operating on complex-valued data. We showcase its effectiveness on Fourier Neural Operators (FNOs), a class of models crucial for solving partial differential equations. Across various PDE tasks, Tensor-GaLore improves generalization by 11% to 50% while reducing optimizer memory usage by up to 76%. These consistent improvements, coupled with substantial memory savings across AI-for-science applications, demonstrate Tensor-GaLore's potential.
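To make the core idea concrete, the minimal sketch below (not the authors' implementation) uses the tensorly library to Tucker-decompose a gradient tensor, keep the per-mode factor matrices as a projection, take an update step in the small core space, and project the result back to the full weight shape. The function names, ranks, and plain gradient-descent step are illustrative assumptions; in practice the projected subspace would typically hold the optimizer's states and be refreshed periodically.

# A minimal sketch of gradient tensor projection via Tucker decomposition.
# Assumes the tensorly library; ranks, names, and the update rule are illustrative.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

def get_projection_factors(grad, ranks):
    # Tucker-decompose the gradient tensor; keep only the per-mode factor matrices.
    _, factors = tucker(tl.tensor(grad), rank=ranks)
    return factors  # one (dim_i x rank_i) factor matrix per tensor mode

def project(grad, factors):
    # Contract each mode with the transposed factor: full gradient -> small core tensor.
    return tl.tenalg.multi_mode_dot(tl.tensor(grad), factors, transpose=True)

def project_back(core_update, factors):
    # Map a low-rank core update back to the full parameter shape.
    return tl.tenalg.multi_mode_dot(core_update, factors)

# Toy usage: a 4-way FNO-style weight gradient compressed to small Tucker ranks.
grad = np.random.randn(16, 16, 8, 8)
factors = get_projection_factors(grad, ranks=[4, 4, 2, 2])
core_grad = project(grad, factors)                      # optimizer state lives at this size
full_update = project_back(-1e-3 * core_grad, factors)  # e.g. a plain gradient-descent step
print(core_grad.shape, full_update.shape)               # (4, 4, 2, 2) (16, 16, 8, 8)

The memory saving in this sketch comes from keeping per-step optimizer quantities at the core size (4 x 4 x 2 x 2) rather than the full tensor size (16 x 16 x 8 x 8), which mirrors the optimizer-memory reduction reported in the abstract.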

Cite

Text

George et al. "Tensor-GaLore: Memory-Efficient Training via Gradient Tensor Decomposition." NeurIPS 2024 Workshops: OPT, 2024.

Markdown

[George et al. "Tensor-GaLore: Memory-Efficient Training via Gradient Tensor Decomposition." NeurIPS 2024 Workshops: OPT, 2024.](https://mlanthology.org/neuripsw/2024/george2024neuripsw-tensorgalore/)

BibTeX

@inproceedings{george2024neuripsw-tensorgalore,
  title     = {{Tensor-GaLore: Memory-Efficient Training via Gradient Tensor Decomposition}},
  author    = {George, Robert Joseph and Pitt, David and Zhao, Jiawei and Kossaifi, Jean and Luo, Cheng and Tian, Yuandong and Anandkumar, Anima},
  booktitle = {NeurIPS 2024 Workshops: OPT},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/george2024neuripsw-tensorgalore/}
}