A Closer Look at Machine Unlearning for Large Language Models

Abstract

Large language models (LLMs) may memorize sensitive or copyrighted content, raising privacy and legal concerns. Given the high cost of retraining from scratch, researchers have turned to machine unlearning to remove specific content from LLMs while preserving overall performance. In this paper, we discuss several issues in machine unlearning for LLMs and provide our insights on possible approaches. To address the inadequate evaluation of model outputs after unlearning, we introduce three additional metrics that evaluate token diversity, sentence semantics, and factual correctness. We then categorize unlearning methods as untargeted or targeted and discuss the issues of each. Specifically, the behavior that untargeted unlearning attempts to approximate is unpredictable and may involve hallucinations, while existing regularization is insufficient for targeted unlearning. To alleviate these issues, we propose using a maximizing-entropy (ME) objective for untargeted unlearning and incorporating an answer preservation (AP) loss as regularization for targeted unlearning. Experimental results across three scenarios, i.e., fictitious unlearning, continual unlearning, and real-world unlearning, demonstrate the effectiveness of our approaches. The code is available at https://github.com/sail-sg/closer-look-LLM-unlearning.
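
To make the ME objective concrete, below is a minimal PyTorch sketch, not the authors' released implementation; the function name me_loss and the masking convention via ignore_index are illustrative assumptions. Since KL(p || uniform) = log|V| - H(p), minimizing the negative mean token entropy over forget-set positions pushes the model's next-token distribution toward uniform:

import torch
import torch.nn.functional as F

def me_loss(logits: torch.Tensor, labels: torch.Tensor, ignore_index: int = -100) -> torch.Tensor:
    # Sketch of a maximizing-entropy (ME) unlearning objective; names are illustrative.
    # logits: [batch, seq_len, vocab]; labels: [batch, seq_len], with ignore_index
    # marking positions outside the forget-set answer.
    # Shift so position t predicts token t+1, as in standard causal LM training.
    logits = logits[:, :-1, :]
    labels = labels[:, 1:]
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)  # per-token entropy H(p_t)
    mask = (labels != ignore_index).float()
    # Maximize mean entropy on forget tokens by minimizing its negative;
    # equivalently, minimize KL(p || uniform) up to the constant log|V|.
    return -(entropy * mask).sum() / mask.sum().clamp(min=1.0)

Under this sketch, a fully unlearned model assigns a near-uniform next-token distribution on forget-set answers, which avoids steering the model toward any particular, possibly hallucinated, alternative response.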

Cite

Text

Yuan et al. "A Closer Look at Machine Unlearning for Large Language Models." International Conference on Learning Representations, 2025.

Markdown

[Yuan et al. "A Closer Look at Machine Unlearning for Large Language Models." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/yuan2025iclr-closer/)

BibTeX

@inproceedings{yuan2025iclr-closer,
  title     = {{A Closer Look at Machine Unlearning for Large Language Models}},
  author    = {Yuan, Xiaojian and Pang, Tianyu and Du, Chao and Chen, Kejiang and Zhang, Weiming and Lin, Min},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/yuan2025iclr-closer/}
}