HBLLM: Wavelet-Enhanced High-Fidelity 1-Bit Quantization for LLMs
Abstract
We introduce HBLLM, a wavelet-enhanced high-fidelity $1$-bit post-training quantization method for Large Language Models (LLMs). By leveraging Haar wavelet transforms to enhance expressive capacity through frequency decomposition, HBLLM significantly improves quantization fidelity while maintaining minimal overhead. This approach features two innovative structure-aware grouping strategies: (1) frequency-aware multi-parameter intra-row grouping and (2) $\ell_2$-norm-based saliency-driven column selection. For non-salient weights, a shared mean is employed across quantization groups within each frequency band to optimize storage efficiency. Experiments conducted on the OPT and LLaMA models demonstrate that HBLLM achieves state-of-the-art performance in $1$-bit quantization, attaining a perplexity of $6.71$ on LLaMA$2$-$13$B with an average weight storage of only $1.08$ bits. Code available at: https://github.com/Yeyke/HBLLM.
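To make the pipeline described in the abstract concrete, the sketch below walks through the three ingredients it names on a toy weight matrix: a single-level Haar transform along each row (frequency decomposition), sign-plus-scale 1-bit quantization with one shared scale per frequency band, and $\ell_2$-norm column saliency for choosing columns to keep at higher precision. The helper names and the exact grouping (a single shared scale per band rather than per-group parameters) are illustrative assumptions, not the released HBLLM implementation; see the repository linked above for the authors' code.

```python
# Minimal sketch, not the authors' code: hypothetical helpers illustrating
# Haar frequency decomposition, per-band 1-bit quantization, and l2-norm
# column saliency as described in the HBLLM abstract.
import numpy as np

def haar_rows(W):
    """Single-level Haar transform along each row: low band = scaled pairwise
    sums, high band = scaled pairwise differences (orthonormal, 1/sqrt(2))."""
    even, odd = W[:, 0::2], W[:, 1::2]
    low = (even + odd) / np.sqrt(2.0)
    high = (even - odd) / np.sqrt(2.0)
    return low, high

def quantize_band_1bit(B):
    """1-bit quantization of one frequency band: keep only the sign of each
    entry plus a single shared scale (mean absolute value) for the band."""
    scale = np.abs(B).mean()
    return np.sign(B), scale

def salient_columns(W, k):
    """Indices of the k columns with the largest l2 norm; these would be
    retained at higher precision instead of being binarized."""
    return np.argsort(np.linalg.norm(W, axis=0))[-k:]

# Toy usage: quantize a random weight matrix, reconstruct, and measure error.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16)).astype(np.float32)

keep = salient_columns(W, k=2)                 # columns left unquantized
low, high = haar_rows(W)                       # frequency decomposition
(s_lo, a_lo), (s_hi, a_hi) = quantize_band_1bit(low), quantize_band_1bit(high)

# Inverse Haar from the dequantized bands: even = (low + high) / sqrt(2),
# odd = (low - high) / sqrt(2).
low_hat, high_hat = a_lo * s_lo, a_hi * s_hi
W_hat = np.empty_like(W)
W_hat[:, 0::2] = (low_hat + high_hat) / np.sqrt(2.0)
W_hat[:, 1::2] = (low_hat - high_hat) / np.sqrt(2.0)
W_hat[:, keep] = W[:, keep]                    # restore salient columns

print("relative reconstruction error:",
      np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```

The single scale per band stands in for the abstract's shared mean across quantization groups within each frequency band, collapsed here to one group per band for brevity; the paper's frequency-aware intra-row grouping would instead split each band into groups with their own parameters.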
Cite

Text
Chen et al. "HBLLM: Wavelet-Enhanced High-Fidelity 1-Bit Quantization for LLMs." Advances in Neural Information Processing Systems, 2025.

Markdown
[Chen et al. "HBLLM: Wavelet-Enhanced High-Fidelity 1-Bit Quantization for LLMs." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/chen2025neurips-hbllm/)

BibTeX
@inproceedings{chen2025neurips-hbllm,
  title     = {{HBLLM: Wavelet-Enhanced High-Fidelity 1-Bit Quantization for LLMs}},
  author    = {Chen, Ningning and Ye, Weicai and Jiang, Ying},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/chen2025neurips-hbllm/}
}