Extreme Compression of Large Language Models via Additive Quantization

Abstract

The emergence of accurate open large language models (LLMs) has led to a race towards performant quantization techniques that can enable their execution on end-user devices. In this paper, we revisit the problem of “extreme” LLM compression—defined as targeting extremely low bit counts, such as 2 to 3 bits per parameter—from the point of view of classic methods in Multi-Codebook Quantization (MCQ). Our algorithm, called AQLM, generalizes the classic Additive Quantization (AQ) approach for information retrieval to advance the state-of-the-art in LLM compression, via two innovations: 1) learned additive quantization of weight matrices in an input-adaptive fashion, and 2) joint optimization of codebook parameters across each transformer block. Broadly, AQLM is the first scheme that is Pareto optimal in terms of accuracy-vs-model-size when compressing to less than 3 bits per parameter, and significantly improves upon all known schemes in the extreme compression (2-bit) regime. In addition, AQLM is practical: we provide fast GPU and CPU implementations of AQLM for token generation, which enable us to match or outperform optimized FP16 implementations for speed, while executing in a much smaller memory footprint.
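To make the additive (multi-codebook) quantization idea concrete, below is a minimal NumPy sketch: each group of weights is reconstructed as the sum of one codeword from each of M codebooks, so storing the codes costs roughly M·log2(K)/g bits per weight. This is only an illustration under assumed settings, not the AQLM implementation; the codebook construction and greedy code assignment here are crude stand-ins for the learned codebooks, beam-search encoding, and joint block-level fine-tuning described in the paper, and all names and sizes (`group_size`, `num_codebooks`, `codebook_size`) are hypothetical.

```python
# Minimal sketch of multi-codebook (additive) quantization of a weight matrix.
# Illustrative simplification, NOT the AQLM implementation: codebooks are crudely
# sampled from the data and codes are picked greedily, whereas AQLM learns
# codebooks on calibration data, selects codes with beam search, and jointly
# fine-tunes codebook parameters across each transformer block.
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in = 64, 64          # toy weight-matrix shape (hypothetical)
group_size = 8                # weights quantized jointly per group
num_codebooks = 2             # M additive codebooks
codebook_size = 256           # K entries -> 8-bit code per codebook

W = rng.standard_normal((d_out, d_in)).astype(np.float32)
groups = W.reshape(-1, group_size)            # each row is one weight group

codes = np.zeros((groups.shape[0], num_codebooks), dtype=np.int64)
codebooks = np.zeros((num_codebooks, codebook_size, group_size), dtype=np.float32)

# Greedy residual-style encoding: codebook by codebook, pick the codeword
# closest to what remains of each group.
residual = groups.copy()
for m in range(num_codebooks):
    # Crude data-dependent codebook: sample current residuals as codewords
    # (a stand-in for the codebooks AQLM learns on calibration data).
    idx = rng.choice(groups.shape[0], size=codebook_size, replace=False)
    codebooks[m] = residual[idx]
    dists = ((residual[:, None, :] - codebooks[m][None, :, :]) ** 2).sum(-1)
    codes[:, m] = dists.argmin(axis=1)
    residual -= codebooks[m][codes[:, m]]

# Decoding: each group is reconstructed as the SUM of its selected codewords.
W_hat = sum(codebooks[m][codes[:, m]] for m in range(num_codebooks))
W_hat = W_hat.reshape(d_out, d_in)

bits_per_weight = num_codebooks * np.log2(codebook_size) / group_size
print(f"~{bits_per_weight:.1f} bits/weight, "
      f"relative error {np.linalg.norm(W - W_hat) / np.linalg.norm(W):.3f}")
```

With these assumed settings (M = 2 codebooks of 256 entries over groups of 8 weights), the codes cost about 2 bits per weight, matching the "extreme compression" regime the abstract refers to; the paper's learned codebooks and beam-search encoding are what make this accurate in practice.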

Cite

Text

Egiazarian et al. "Extreme Compression of Large Language Models via Additive Quantization." International Conference on Machine Learning, 2024.

Markdown

[Egiazarian et al. "Extreme Compression of Large Language Models via Additive Quantization." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/egiazarian2024icml-extreme/)

BibTeX

@inproceedings{egiazarian2024icml-extreme,
  title     = {{Extreme Compression of Large Language Models via Additive Quantization}},
  author    = {Egiazarian, Vage and Panferov, Andrei and Kuznedelev, Denis and Frantar, Elias and Babenko, Artem and Alistarh, Dan},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {12284--12303},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/egiazarian2024icml-extreme/}
}