Personalized LoRA for Human-Centered Text Understanding
Abstract
Effectively and efficiently adapting a pre-trained language model (PLM) for human-centered text understanding (HCTU) is challenging because most personalized applications involve millions of user tokens that lack concrete, explicit semantics. A standard parameter-efficient approach (e.g., LoRA) would require memorizing a separate set of adapters for each user. In this work, we introduce a personalized LoRA (PLoRA) with a plug-and-play (PnP) framework for the HCTU task. PLoRA is effective, parameter-efficient, and dynamically deployable in PLMs. Moreover, personalized dropout and mutual information maximization strategies are adopted, so the proposed PLoRA can be well adapted to few/zero-shot learning scenarios and thereby address the cold-start issue. Experiments conducted on four benchmark datasets show that the proposed method outperforms existing methods in full/few/zero-shot learning scenarios for the HCTU task, despite having fewer trainable parameters. For reproducibility, the code for this paper is available at: https://github.com/yoyo-yun/PLoRA.
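The abstract describes personalizing LoRA adapters so that a single shared adapter, plus a small per-user component, serves many users. Below is a minimal sketch of that general idea, not the authors' implementation (see the linked repository for the actual PLoRA): a LoRA-style linear layer whose low-rank update is gated by a lightweight user embedding. All names here (`PersonalizedLoRALinear`, `num_users`, `user_dim`, the sigmoid gate) are illustrative assumptions.

```python
# Sketch only: a LoRA-style layer with a per-user gate on the low-rank path,
# so each user stores a tiny vector instead of a full adapter.
import torch
import torch.nn as nn


class PersonalizedLoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8, num_users=1000, user_dim=8):
        super().__init__()
        # Frozen pre-trained weight (stands in for a PLM projection).
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)
        # Shared low-rank LoRA factors.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        # Lightweight per-user embedding that modulates the low-rank update.
        self.user_emb = nn.Embedding(num_users, user_dim)
        self.user_gate = nn.Linear(user_dim, rank)

    def forward(self, x, user_ids):
        # x: (batch, seq, in_features); user_ids: (batch,)
        gate = torch.sigmoid(self.user_gate(self.user_emb(user_ids)))  # (batch, rank)
        low_rank = x @ self.lora_A.t()                                 # (batch, seq, rank)
        low_rank = low_rank * gate.unsqueeze(1)                        # personalize the update
        return self.base(x) + low_rank @ self.lora_B.t()


# Usage: one shared adapter plus a small per-user vector serves many users.
layer = PersonalizedLoRALinear(768, 768)
out = layer(torch.randn(2, 16, 768), torch.tensor([3, 42]))
print(out.shape)  # torch.Size([2, 16, 768])
```

The design choice illustrated here is that personalization touches only the low-rank path, leaving the frozen PLM weights shared across all users; the paper's PnP framework, personalized dropout, and mutual information objective are not reproduced in this sketch.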
Cite
Text
Zhang et al. "Personalized LoRA for Human-Centered Text Understanding." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I17.29931
Markdown
[Zhang et al. "Personalized LoRA for Human-Centered Text Understanding." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/zhang2024aaai-personalized/) doi:10.1609/AAAI.V38I17.29931
BibTeX
@inproceedings{zhang2024aaai-personalized,
title = {{Personalized LoRA for Human-Centered Text Understanding}},
author = {Zhang, You and Wang, Jin and Yu, Liang-Chih and Xu, Dan and Zhang, Xuejie},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {19588--19596},
doi = {10.1609/AAAI.V38I17.29931},
url = {https://mlanthology.org/aaai/2024/zhang2024aaai-personalized/}
}