CodeGEMM: A Codebook-Centric Approach to Efficient GEMM in Quantized LLMs
Abstract
Weight-only quantization is widely used to mitigate the memory-bound nature of LLM inference. Codebook-based methods extend this trend by achieving strong accuracy in the extremely low-bit regime (e.g., 2-bit). However, current kernels rely on dequantization, which repeatedly fetches centroids and reconstructs weights, incurring substantial latency and cache pressure. We present CodeGEMM, a codebook-centric GEMM kernel that replaces dequantization with precomputed inner products between centroids and activations, stored in a lightweight Psumbook. At inference time, code indices directly gather these partial sums, eliminating per-element lookups and reducing the on-chip footprint. The kernel supports systematic exploration of latency–memory–accuracy trade-offs under a unified implementation. On Llama-3 models, CodeGEMM delivers 1.83x (8B) and 8.93x (70B) speedups in the 2-bit configuration over state-of-the-art codebook-based quantization at comparable accuracy, and further improves compute efficiency and memory-subsystem utilization.
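The arithmetic behind the Psumbook idea can be illustrated in a few lines. The NumPy sketch below is a minimal illustration, not the paper's kernel: all shapes, the group size `g`, the codebook size `K`, and the names `psumbook` and `codes` are hypothetical. It contrasts the conventional dequantize-then-GEMM path with the codebook-centric gather, where centroid–activation inner products are computed once per activation and then gathered by code index.

```python
import numpy as np

# Illustrative shapes (hypothetical, not the paper's configuration):
# weights are vector-quantized in groups of g elements, K centroids each.
g, K = 8, 256             # group size and codebook size
n_out, n_groups = 4, 16   # output rows and weight groups per row
d_in = g * n_groups       # input (activation) dimension

rng = np.random.default_rng(0)
codebook = rng.standard_normal((K, g)).astype(np.float32)  # centroids
codes = rng.integers(0, K, size=(n_out, n_groups))         # weight code indices
x = rng.standard_normal(d_in).astype(np.float32)           # activation vector

# --- Baseline: dequantize, then GEMV ---------------------------------
W = codebook[codes].reshape(n_out, d_in)  # reconstruct full weight matrix
y_dequant = W @ x

# --- Codebook-centric: precompute a "Psumbook" of centroid-activation
# inner products once per activation, then gather partial sums by code
# index. No per-element weight reconstruction is needed. --------------
x_groups = x.reshape(n_groups, g)
psumbook = codebook @ x_groups.T          # (K, n_groups) partial sums
y_psum = psumbook[codes, np.arange(n_groups)].sum(axis=1)

assert np.allclose(y_dequant, y_psum, atol=1e-4)
```

Both paths produce the same output; the difference is that the gather path touches only K x n_groups precomputed partial sums instead of reconstructing every weight element, which is the source of the latency and cache-pressure savings the abstract describes. The actual CodeGEMM kernel is a fused GPU implementation; this sketch only demonstrates the arithmetic equivalence.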
Cite
Text
Park et al. "CodeGEMM: A Codebook-Centric Approach to Efficient GEMM in Quantized LLMs." Advances in Neural Information Processing Systems, 2025.
Markdown
[Park et al. "CodeGEMM: A Codebook-Centric Approach to Efficient GEMM in Quantized LLMs." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/park2025neurips-codegemm/)
BibTeX
@inproceedings{park2025neurips-codegemm,
  title = {{CodeGEMM: A Codebook-Centric Approach to Efficient GEMM in Quantized LLMs}},
  author = {Park, Gunho and Bae, Jeongin and Kim, Byeongwook and Park, Baeseong and Ryu, Jiwon and Kim, Hoseung and Kwon, Se Jung and Lee, Dongsoo},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2025},
  url = {https://mlanthology.org/neurips/2025/park2025neurips-codegemm/}
}