RILQ: Rank-Insensitive LoRA-Based Quantization Error Compensation for Boosting 2-Bit Large Language Model Accuracy

Cite

Text

Lee et al. "RILQ: Rank-Insensitive LoRA-Based Quantization Error Compensation for Boosting 2-Bit Large Language Model Accuracy." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/aaai.v39i17.33990

Markdown

[Lee et al. "RILQ: Rank-Insensitive LoRA-Based Quantization Error Compensation for Boosting 2-Bit Large Language Model Accuracy." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/lee2025aaai-rilq/) doi:10.1609/aaai.v39i17.33990

BibTeX

@inproceedings{lee2025aaai-rilq,
  title     = {{RILQ: Rank-Insensitive LoRA-Based Quantization Error Compensation for Boosting 2-Bit Large Language Model Accuracy}},
  author    = {Lee, Geonho and Lee, Janghwan and Hong, Sukjin and Kim, Minsoo and Ahn, Euijai and Chang, Du-Seong and Choi, Jungwook},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {18091--18100},
  doi       = {10.1609/aaai.v39i17.33990},
  url       = {https://mlanthology.org/aaai/2025/lee2025aaai-rilq/}
}