HiRA: Parameter-Efficient Hadamard High-Rank Adaptation for Large Language Models
Abstract
We propose Hadamard High-Rank Adaptation (HiRA), a parameter-efficient fine-tuning (PEFT) method that enhances the adaptability of Large Language Models (LLMs). While Low-Rank Adaptation (LoRA) is widely used to reduce resource demands, its low-rank updates may limit its expressiveness for new tasks. HiRA addresses this by using a Hadamard product to retain high-rank update parameters, improving model capacity. Empirically, HiRA outperforms LoRA and its variants on several tasks, with extensive ablation studies validating its effectiveness. Our code is available at https://github.com/hqsiswiliam/hira.
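To make the mechanism concrete, below is a minimal PyTorch sketch of one plausible reading of the abstract: the adapter keeps LoRA-style low-rank factors `A` and `B`, but combines their product with the frozen pretrained weight via an elementwise (Hadamard) product rather than a plain additive update, so the effective weight update need not be rank-limited by `r`. The class name `HiRALinear` and all initialization choices here are illustrative assumptions, not the authors' reference implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HiRALinear(nn.Module):
    """Illustrative Hadamard high-rank adapter around a frozen linear layer.

    Assumption: the update takes the form W0 * (A @ B), i.e. the Hadamard
    product of the frozen weight W0 with a low-rank product. The trainable
    parameter count stays at r * (d_in + d_out), like LoRA, while the
    resulting update is not constrained to rank r.
    """

    def __init__(self, base: nn.Linear, r: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the pretrained weight W0

        d_out, d_in = base.weight.shape
        # Low-rank factors; B starts at zero so training begins from the base model.
        self.A = nn.Parameter(torch.randn(d_out, r) * 0.01)
        self.B = nn.Parameter(torch.zeros(r, d_in))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Hadamard product of the frozen weight with the low-rank product:
        # unlike LoRA's additive A @ B term, W0 * (A @ B) can be high rank.
        delta_w = self.base.weight * (self.A @ self.B)
        return self.base(x) + F.linear(x, delta_w)
```

As a usage sketch, an existing `nn.Linear` inside an LLM's attention or MLP block would be wrapped as `HiRALinear(layer, r=8)`, and only `A` and `B` would receive gradients during fine-tuning.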
Cite
Text
Huang et al. "HiRA: Parameter-Efficient Hadamard High-Rank Adaptation for Large Language Models." International Conference on Learning Representations, 2025.
Markdown
[Huang et al. "HiRA: Parameter-Efficient Hadamard High-Rank Adaptation for Large Language Models." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/huang2025iclr-hira/)
BibTeX
@inproceedings{huang2025iclr-hira,
title = {{HiRA: Parameter-Efficient Hadamard High-Rank Adaptation for Large Language Models}},
author = {Huang, Qiushi and Ko, Tom and Zhuang, Zhan and Tang, Lilian and Zhang, Yu},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/huang2025iclr-hira/}
}