Unlearning In- vs. Out-of-Distribution Data in LLMs Under Gradient-Based Methods

Abstract

Machine unlearning aims to remove the influence of selected training examples from a learned model. Despite increasing attention to this problem, it remains an open research question how to evaluate unlearning in large language models (LLMs), and which properties of the data to be unlearned affect the quality and efficiency of unlearning. This work formalizes a metric to evaluate unlearning quality in generative models, and uses it to assess the trade-offs between unlearning quality and performance. We demonstrate that unlearning out-of-distribution examples requires more unlearning steps but overall presents a better trade-off. For in-distribution examples, however, we observe a rapid decay in performance as unlearning progresses. We further evaluate how an example's memorization and difficulty affect unlearning under a classical gradient-ascent-based approach.
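
The abstract refers to a classical gradient-ascent-based unlearning approach. As a rough illustration only (not the authors' exact procedure), the sketch below shows the general idea: take gradient *ascent* steps on the language-modeling loss of the examples to be forgotten. All names (`gradient_ascent_unlearn`, `forget_loader`, the assumption that `model(input_ids)` returns per-token logits) are hypothetical.

```python
# Hypothetical sketch of gradient-ascent unlearning on a "forget set".
# Not the paper's implementation; names and interfaces are illustrative.
import torch
import torch.nn.functional as F


def gradient_ascent_unlearn(model, forget_loader, num_steps, lr=1e-5, device="cpu"):
    """Maximize the next-token loss on the forget set for num_steps updates."""
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    step = 0
    while step < num_steps:
        for input_ids in forget_loader:  # each batch: LongTensor (batch, seq_len)
            input_ids = input_ids.to(device)
            # Standard next-token language-modeling loss on the forget examples;
            # assumes model(x) returns logits of shape (batch, seq_len, vocab).
            logits = model(input_ids[:, :-1])
            loss = F.cross_entropy(
                logits.reshape(-1, logits.size(-1)),
                input_ids[:, 1:].reshape(-1),
            )
            opt.zero_grad()
            # Negate the loss so the optimizer's descent step becomes ascent.
            (-loss).backward()
            opt.step()
            step += 1
            if step >= num_steps:
                break
    return model
```

In this sketch the number of ascent steps plays the role of the unlearning budget discussed in the abstract: more steps remove more influence of the forget set but risk the performance decay observed for in-distribution examples.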

Cite

Text

Baluta et al. "Unlearning In- vs. Out-of-Distribution Data in LLMs Under Gradient-Based Methods." NeurIPS 2024 Workshops: SafeGenAi, 2024.

Markdown

[Baluta et al. "Unlearning In- vs. Out-of-Distribution Data in LLMs Under Gradient-Based Methods." NeurIPS 2024 Workshops: SafeGenAi, 2024.](https://mlanthology.org/neuripsw/2024/baluta2024neuripsw-unlearning/)

BibTeX

@inproceedings{baluta2024neuripsw-unlearning,
  title     = {{Unlearning In- vs. Out-of-Distribution Data in LLMs Under Gradient-Based Methods}},
  author    = {Baluta, Teodora and Lamblin, Pascal and Tarlow, Daniel and Pedregosa, Fabian and Dziugaite, Gintare Karolina},
  booktitle = {NeurIPS 2024 Workshops: SafeGenAi},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/baluta2024neuripsw-unlearning/}
}