In-Context Meta LoRA Generation
Abstract
Low-Rank Adaptation (LoRA) has demonstrated remarkable capabilities for task-specific fine-tuning. However, in scenarios involving multiple tasks, training a separate LoRA model for each task is highly inefficient in terms of storage and inference. Moreover, existing parameter generation methods fail to capture the correlations among tasks, making multi-task LoRA parameter generation challenging. To address these limitations, we propose In-Context Meta LoRA (ICM-LoRA), a novel approach that efficiently achieves task-specific customization of large language models (LLMs). Specifically, we use training data from all tasks to train a tailored generator, a Conditional Variational Autoencoder (CVAE). The CVAE takes task descriptions as input and produces task-aware LoRA weights as output. These weights are then merged with the LLM to create task-specialized models without additional fine-tuning. Furthermore, we employ in-context meta-learning for knowledge enhancement and task mapping, capturing the relationship between tasks and parameter distributions. As a result, our method generates LoRA parameters for diverse tasks more accurately using the CVAE. ICM-LoRA enables more accurate LoRA parameter reconstruction than current parameter reconstruction methods and is useful for task-specific enhancement of LoRA parameters. At the same time, our method occupies only 283 MB, about 1% of the storage required by the original LoRA models. The code is available at https://github.com/YihuaJerry/ICM-LoRA.
Cite
Text
Shao et al. "In-Context Meta LoRA Generation." International Joint Conference on Artificial Intelligence, 2025. doi:10.24963/IJCAI.2025/683
Markdown
[Shao et al. "In-Context Meta LoRA Generation." International Joint Conference on Artificial Intelligence, 2025.](https://mlanthology.org/ijcai/2025/shao2025ijcai-context/) doi:10.24963/IJCAI.2025/683
BibTeX
@inproceedings{shao2025ijcai-context,
title = {{In-Context Meta LoRA Generation}},
author = {Shao, Yihua and Yan, Minxi and Liu, Yang and Chen, Siyu and Chen, Wenjie and Long, Xinwei and Yan, Ziyang and Li, Lei and Zhang, Chenyu and Sebe, Nicu and Tang, Hao and Wang, Yan and Zhao, Hao and Wang, Mengzhu and Guo, Jingcai},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2025},
pages = {6138--6146},
doi = {10.24963/IJCAI.2025/683},
url = {https://mlanthology.org/ijcai/2025/shao2025ijcai-context/}
}