GRU: Mitigating the Trade-Off Between Unlearning and Retention for LLMs

Abstract

Large language model (LLM) unlearning has proven essential for removing privacy- and copyright-related responses, a prerequisite for the legal and safe deployment of LLMs. However, the pursuit of complete unlearning often comes at a substantial cost to general model functionality, leading to a notorious trade-off between unlearning and retention. This motivates us to explore enhanced unlearning schemes that mitigate this trade-off. Specifically, we propose Gradient Rectified Unlearning (GRU), an improved framework that regulates the directions of gradient updates during the unlearning procedure so that their side effects on other, unrelated responses are minimized. GRU is simple and general to implement, demonstrating practical effectiveness across a variety of well-established unlearning benchmarks.
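
To make the gradient-rectification idea concrete, below is a minimal sketch of one plausible instantiation: when the unlearning gradient conflicts with the retention gradient (negative inner product), the conflicting component is projected out, so the update leaves the retention loss unchanged to first order. This is an illustrative reading of the abstract, not the authors' reference implementation; the function names, the projection rule, and the toy losses are all hypothetical.

import torch
import torch.nn as nn

def flat_grad(loss, params):
    # Gradient of `loss` w.r.t. `params`, flattened into one vector.
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def rectify(g_unlearn, g_retain, eps=1e-12):
    # If the unlearning gradient conflicts with the retention gradient
    # (negative inner product), project out the conflicting component so
    # the update does not increase the retention loss to first order.
    dot = torch.dot(g_unlearn, g_retain)
    if dot < 0:
        g_unlearn = g_unlearn - (dot / (g_retain.norm().pow(2) + eps)) * g_retain
    return g_unlearn

# Toy demonstration on a linear model; the two losses stand in for the
# forget-set and retain-set objectives of an LLM.
model = nn.Linear(8, 1)
params = list(model.parameters())
x_f, y_f = torch.randn(4, 8), torch.randn(4, 1)   # forget batch
x_r, y_r = torch.randn(4, 8), torch.randn(4, 1)   # retain batch

loss_unlearn = -nn.functional.mse_loss(model(x_f), y_f)  # ascent on forget loss
loss_retain = nn.functional.mse_loss(model(x_r), y_r)

g = rectify(flat_grad(loss_unlearn, params), flat_grad(loss_retain, params))

# Apply the rectified descent step parameter by parameter.
lr, offset = 1e-2, 0
with torch.no_grad():
    for p in params:
        n = p.numel()
        p -= lr * g[offset:offset + n].view_as(p)
        offset += n

After rectification the inner product between the applied update and the retention gradient is non-negative, which is one simple way to realize "regulating the directions of gradient updates" described in the abstract.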

Cite

Text

Wang et al. "GRU: Mitigating the Trade-Off Between Unlearning and Retention for LLMs." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Wang et al. "GRU: Mitigating the Trade-Off Between Unlearning and Retention for LLMs." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/wang2025icml-gru/)

BibTeX

@inproceedings{wang2025icml-gru,
  title     = {{GRU: Mitigating the Trade-Off Between Unlearning and Retention for LLMs}},
  author    = {Wang, Yue and Wang, Qizhou and Liu, Feng and Huang, Wei and Du, Yali and Du, Xiaojiang and Han, Bo},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {64690--64710},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/wang2025icml-gru/}
}