Norm Tweaking: High-Performance Low-Bit Quantization of Large Language Models

Cite

Text

Li et al. "Norm Tweaking: High-Performance Low-Bit Quantization of Large Language Models." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I17.29815

Markdown

[Li et al. "Norm Tweaking: High-Performance Low-Bit Quantization of Large Language Models." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/li2024aaai-norm/) doi:10.1609/AAAI.V38I17.29815

BibTeX

@inproceedings{li2024aaai-norm,
  title     = {{Norm Tweaking: High-Performance Low-Bit Quantization of Large Language Models}},
  author    = {Li, Liang and Li, Qingyuan and Zhang, Bo and Chu, Xiangxiang},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {18536--18544},
  doi       = {10.1609/AAAI.V38I17.29815},
  url       = {https://mlanthology.org/aaai/2024/li2024aaai-norm/}
}