Compressing Large Language Models Using Low Rank and Low Precision Decomposition

Abstract

The prohibitive sizes of Large Language Models (LLMs) today make it difficult to deploy them on memory-constrained edge devices. This work introduces $\rm CALDERA$, a new post-training LLM compression algorithm that harnesses the inherent low-rank structure of a weight matrix $\mathbf{W}$ by approximating it via a low-rank, low-precision decomposition as $\mathbf{W} \approx \mathbf{Q} + \mathbf{L}\mathbf{R}$. Here, $\mathbf{L}$ and $\mathbf{R}$ are low-rank factors, and the entries of $\mathbf{Q}$, $\mathbf{L}$ and $\mathbf{R}$ are quantized. The model is compressed by substituting each layer with its $\mathbf{Q} + \mathbf{L}\mathbf{R}$ decomposition, and the zero-shot performance of the compressed model is evaluated. Additionally, $\mathbf{L}$ and $\mathbf{R}$ are readily amenable to low-rank adaptation, which further enhances zero-shot performance. $\rm CALDERA$ obtains this decomposition by formulating it as an optimization problem $\min_{\mathbf{Q},\mathbf{L},\mathbf{R}}\lVert(\mathbf{Q} + \mathbf{L}\mathbf{R} - \mathbf{W})\mathbf{X}^\top\rVert_{\rm F}^2$, where $\mathbf{X}$ is the calibration data, and $\mathbf{Q}, \mathbf{L}, \mathbf{R}$ are constrained to be representable using low-precision formats. Theoretical upper bounds on the approximation error of $\rm CALDERA$ are established using a rank-constrained regression framework, and the tradeoff between compression ratio and model performance is studied by analyzing the impact of target rank and quantization bit budget. Results illustrate that LlaMa-$2$ $7$B/$13$B/$70$B and LlaMa-$3$ $8$B models compressed using $\rm CALDERA$ outperform existing post-training LLM compression techniques in the regime of less than $2.5$ bits per parameter.
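To make the decomposition described above concrete, the sketch below alternates between quantizing the backbone $\mathbf{Q}$ and refitting the low-rank factors $\mathbf{L}, \mathbf{R}$ against the calibration-weighted objective $\lVert(\mathbf{Q} + \mathbf{L}\mathbf{R} - \mathbf{W})\mathbf{X}^\top\rVert_{\rm F}^2$. This is a minimal NumPy sketch, not the authors' implementation: the names (`caldera_sketch`, `quantize`, `rank`, `q_bits`, `lr_bits`, `iters`), the plain uniform quantizer, the $10^{-6}$ regularization, and the fixed iteration count are all illustrative assumptions; the paper's algorithm uses more sophisticated quantizers and a rank-constrained regression analysis.

```python
import numpy as np

def quantize(mat, bits):
    # Round-to-nearest uniform quantizer over a symmetric range -- a crude
    # stand-in for the quantizers used in the paper.
    levels = 2 ** (bits - 1)
    scale = np.abs(mat).max() / levels + 1e-12
    return np.clip(np.round(mat / scale), -levels, levels - 1) * scale

def caldera_sketch(W, X, rank=64, q_bits=2, lr_bits=4, iters=15):
    """Approximate W (d_out x d_in) by Q + L @ R, alternating between the two
    terms to reduce the calibration-weighted error ||(Q + L R - W) X^T||_F^2."""
    d_in = W.shape[1]
    H = X.T @ X + 1e-6 * np.eye(d_in)   # regularized second moment of calibration data
    C = np.linalg.cholesky(H)           # H = C C^T, so ||M X^T||_F = ||M C||_F
    L = np.zeros((W.shape[0], rank))
    R = np.zeros((rank, d_in))
    Q = np.zeros_like(W)
    for _ in range(iters):
        # Update Q: quantize whatever the low-rank part does not capture.
        Q = quantize(W - L @ R, q_bits)
        # Update L, R: best rank-`rank` fit to (W - Q) in the whitened metric,
        # i.e. a truncated SVD of (W - Q) C mapped back through C^{-1};
        # the resulting factors are quantized to `lr_bits` bits.
        U, s, Vt = np.linalg.svd((W - Q) @ C, full_matrices=False)
        L = quantize(U[:, :rank] * s[:rank], lr_bits)
        R = quantize(np.linalg.solve(C.T, Vt[:rank].T).T, lr_bits)
    return Q, L, R

# Toy usage: compress a random 512 x 256 "layer" with 64 calibration samples.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 256))
X = rng.standard_normal((64, 256))
Q, L, R = caldera_sketch(W, X, rank=32)
err = np.linalg.norm((Q + L @ R - W) @ X.T) / np.linalg.norm(W @ X.T)
print(f"relative calibration error: {err:.3f}")
```

Whitening with the Cholesky factor of $\mathbf{X}^\top\mathbf{X}$ turns the calibration-weighted objective into an ordinary Frobenius-norm problem, so the rank-constrained step in this sketch reduces to a truncated SVD of the whitened residual.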

Cite

Text

Saha et al. "Compressing Large Language Models Using Low Rank and Low Precision Decomposition." Neural Information Processing Systems, 2024. doi:10.52202/079017-2823

Markdown

[Saha et al. "Compressing Large Language Models Using Low Rank and Low Precision Decomposition." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/saha2024neurips-compressing/) doi:10.52202/079017-2823

BibTeX

@inproceedings{saha2024neurips-compressing,
  title     = {{Compressing Large Language Models Using Low Rank and Low Precision Decomposition}},
  author    = {Saha, Rajarshi and Sagan, Naomi and Srivastava, Varun and Goldsmith, Andrea J. and Pilanci, Mert},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2823},
  url       = {https://mlanthology.org/neurips/2024/saha2024neurips-compressing/}
}