Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning

Abstract

Large language models (LLMs) have achieved remarkable performance on various natural language tasks. However, they are trained on static corpora, and their knowledge can quickly become outdated in a fast-changing world. This motivates the development of knowledge editing methods designed to update certain knowledge in LLMs without changing unrelated knowledge. To make selective edits, previous efforts often sought to update a small number of parameters in some specific layer(s) of an LLM. Nonetheless, in challenging scenarios, they still fall short of making successful edits while simultaneously preserving knowledge irrelevant to the updates, resulting in a notable editing-locality trade-off. In this work, we question whether this trade-off is caused by the fact that parameter-based updates have a global effect, i.e., edited parameters affect all inputs indiscriminately. In light of this, we explore the feasibility of representation fine-tuning, which applies a linear update to a few representations in a learned subspace, for knowledge editing. While representation fine-tuning is effective at enhancing an LLM's general ability, as demonstrated in prior work, we theoretically show that this linear update imposes a tension in the editing-locality trade-off. We therefore propose BaFT to break the linearity. BaFT computes a weight for each basis that spans a dimension of the subspace, based on the input representation. This input-dependent weighting mechanism allows BaFT to manage different types of knowledge adaptively, thereby achieving a better editing-locality trade-off. Experiments on three LLMs with five editing benchmarks in diverse scenarios show the superiority of our method.
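To make the contrast in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of the two interventions it describes: a ReFT-style linear update applied inside a learned low-rank subspace, and a basis-level variant in which an input-dependent weight is computed for each basis direction. The class and parameter names are illustrative, and the sigmoid gate is an assumption; the paper's actual BaFT parameterization may differ.

```python
import torch
import torch.nn as nn


class LinearSubspaceEdit(nn.Module):
    """ReFT-style intervention: apply one linear update to a hidden state
    inside a learned rank-r subspace, independent of the input content.
    (R is typically constrained to have orthonormal rows; omitted here.)"""

    def __init__(self, hidden_dim: int, rank: int):
        super().__init__()
        self.R = nn.Parameter(torch.randn(rank, hidden_dim) * 0.02)  # subspace bases
        self.W = nn.Linear(hidden_dim, rank)                         # learned projection
        self.b = nn.Parameter(torch.zeros(rank))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, hidden_dim). Replace the component of h lying in the
        # subspace spanned by R with a learned linear function of h.
        delta = self.W(h) + self.b - h @ self.R.T   # (batch, rank)
        return h + delta @ self.R                    # same edit rule for every input


class BasisLevelEdit(nn.Module):
    """Hypothetical BaFT-style variant: a gate computed from the input
    assigns a separate weight to each basis direction, so edit-irrelevant
    inputs can pass through nearly unchanged while edit-relevant inputs
    receive the full update."""

    def __init__(self, hidden_dim: int, rank: int):
        super().__init__()
        self.R = nn.Parameter(torch.randn(rank, hidden_dim) * 0.02)
        self.W = nn.Linear(hidden_dim, rank)
        self.b = nn.Parameter(torch.zeros(rank))
        self.gate = nn.Linear(hidden_dim, rank)       # per-basis weights from the input

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        alpha = torch.sigmoid(self.gate(h))           # (batch, rank), one weight per basis
        delta = self.W(h) + self.b - h @ self.R.T
        return h + (alpha * delta) @ self.R           # input-dependent, basis-level edit
```

The intended point of the sketch is the single difference between the two modules: the linear version applies the subspace update uniformly to all inputs, whereas the basis-level version lets the gate suppress the update per basis for inputs unrelated to the edit, which is the mechanism the abstract credits for the improved editing-locality trade-off.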

Cite

Text

Liu et al. "Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning." International Conference on Learning Representations, 2025.

Markdown

[Liu et al. "Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/liu2025iclr-unlocking/)

BibTeX

@inproceedings{liu2025iclr-unlocking,
  title     = {{Unlocking Efficient, Scalable, and Continual Knowledge Editing with Basis-Level Representation Fine-Tuning}},
  author    = {Liu, Tianci and Li, Ruirui and Qi, Yunzhe and Liu, Hui and Tang, Xianfeng and Zheng, Tianqi and Yin, Qingyu and Cheng, Monica Xiao and Huan, Jun and Wang, Haoyu and Gao, Jing},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/liu2025iclr-unlocking/}
}