Radio: Rate–Distortion Optimization for Large Language Model Compression

Abstract

In recent years, the compression of large language models (LLMs) has emerged as a key problem in facilitating LLM deployment on resource-limited devices, reducing compute costs, and mitigating the environmental footprint of large-scale AI infrastructure. Here, we establish the foundations of LLM quantization from a rate–distortion theory perspective and propose a quantization technique based on simple rate–distortion optimization. Our technique scales to models containing hundreds of billions of weight parameters and offers the flexibility to compress models, post-training, to a user-specified model size or accuracy.
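For readers unfamiliar with the rate–distortion framing, a minimal sketch follows. It shows the textbook Lagrangian form of rate–distortion optimization, not necessarily the paper's exact objective; the symbols W (original weights), Ŵ (quantized weights), D (distortion), R (rate in bits), and λ (trade-off multiplier) are generic notation introduced here for illustration:

\min_{\hat{W}} \; D(W, \hat{W}) + \lambda \, R(\hat{W})

Sweeping λ traces out the rate–distortion curve, which is what allows a post-training target to be stated either as a model size (rate) or as an accuracy (distortion). As a further standard result, again not claimed as the paper's algorithm, if weight groups are modeled as Gaussian with variances \sigma_i^2, the classical optimal bit allocation assigns each group

b_i = \bar{b} + \tfrac{1}{2} \log_2 \frac{\sigma_i^2}{\bigl(\prod_{j=1}^{n} \sigma_j^2\bigr)^{1/n}},

i.e., groups with higher variance receive proportionally more bits around the average budget \bar{b}.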

Cite

Text

Young. "Radio: Rate–Distortion Optimization for Large Language Model Compression." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Young. "Radio: Rate–Distortion Optimization for Large Language Model Compression." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/young2025icml-radio/)

BibTeX

@inproceedings{young2025icml-radio,
  title     = {{Radio: Rate–Distortion Optimization for Large Language Model Compression}},
  author    = {Young, Sean I.},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {72819--72836},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/young2025icml-radio/}
}