QuaRot: Outlier-Free 4-Bit Inference in Rotated LLMs

Abstract

We introduce QuaRot, a new Quantization scheme based on Rotations, which quantizes LLMs end-to-end, including all weights, activations, and the KV cache, in 4 bits. QuaRot rotates LLMs in a way that removes outliers from the hidden state without changing the output, making quantization easier. This computational invariance is applied to the hidden state (residual) of the LLM, as well as to the activations of the feed-forward components, aspects of the attention mechanism, and the KV cache. The result is a quantized model in which all matrix multiplications are performed in 4 bits, without any channels retained in higher precision. Our 4-bit quantized LLAMA2-70B model loses at most 0.47 WikiText-2 perplexity and retains 99% of the zero-shot performance. We also show that QuaRot can provide lossless 6- and 8-bit LLAMA-2 models without any calibration data using round-to-nearest quantization. Code is available at github.com/spcl/QuaRot.
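The computational-invariance idea can be illustrated in a few lines. The toy sketch below (not the paper's implementation; shapes and values are made up for illustration) multiplies an activation with an outlier channel by a normalized Hadamard matrix: because the rotation is orthogonal, folding its transpose into the weights leaves the layer's output unchanged, while the outlier's energy is spread across channels, shrinking the range a 4-bit quantizer must cover.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8
x = rng.normal(size=(d,))
x[3] += 50.0                 # inject an outlier channel, as seen in LLM hidden states
W = rng.normal(size=(d, 4))  # toy weight matrix

# Normalized 8x8 Walsh-Hadamard matrix: orthogonal, so H @ H.T == I.
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
H = np.kron(np.kron(H2, H2), H2) / np.sqrt(d)

# Computational invariance: (x H)(H^T W) == x W because H H^T = I,
# so the rotation can be folded into the weights offline.
y_ref = x @ W
y_rot = (x @ H) @ (H.T @ W)
assert np.allclose(y_ref, y_rot)

# The rotation spreads the outlier over all channels, reducing the
# worst-case magnitude the quantizer must represent.
print(f"max |x|  = {np.abs(x).max():.1f}")
print(f"max |xH| = {np.abs(x @ H).max():.1f}")
```

In the full method the rotations are applied to the residual stream, feed-forward activations, attention, and KV cache as the abstract describes; this sketch only demonstrates why a single rotated matmul is exact while its inputs become easier to quantize.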

Cite

Text

Ashkboos et al. "QuaRot: Outlier-Free 4-Bit Inference in Rotated LLMs." Neural Information Processing Systems, 2024. doi:10.52202/079017-3180

Markdown

[Ashkboos et al. "QuaRot: Outlier-Free 4-Bit Inference in Rotated LLMs." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/ashkboos2024neurips-quarot/) doi:10.52202/079017-3180

BibTeX

@inproceedings{ashkboos2024neurips-quarot,
  title     = {{QuaRot: Outlier-Free 4-Bit Inference in Rotated LLMs}},
  author    = {Ashkboos, Saleh and Mohtashami, Amirkeivan and Croci, Maximilian L. and Li, Bo and Cameron, Pashmina and Jaggi, Martin and Alistarh, Dan and Hoefler, Torsten and Hensman, James},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-3180},
  url       = {https://mlanthology.org/neurips/2024/ashkboos2024neurips-quarot/}
}