Langevin Unlearning: A New Perspective of Noisy Gradient Descent for Machine Unlearning

Abstract

Machine unlearning has attracted significant interest with the adoption of laws ensuring the "right to be forgotten". Researchers have formalized a probabilistic notion of approximate unlearning under a definition similar to Differential Privacy (DP), where privacy is defined as statistical indistinguishability from retraining from scratch. We propose Langevin unlearning, an unlearning framework based on noisy gradient descent with privacy guarantees for approximate unlearning problems. Langevin unlearning unifies the DP learning process and the privacy-certified unlearning process, with many algorithmic benefits. These include approximate certified unlearning for non-convex problems, complexity savings compared to retraining, and sequential and batch unlearning for multiple unlearning requests.
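To make the framework concrete, here is a minimal sketch of the noisy-gradient-descent pattern the abstract describes: train with projected noisy gradient descent (Langevin dynamics), then, on an unlearning request, continue the same noisy dynamics on the retained data for a few extra steps instead of retraining from scratch. All names (`noisy_gd`, `logistic_grad`), the logistic loss, and every hyperparameter below are illustrative assumptions, not the paper's actual algorithm or settings.

```python
import numpy as np

def noisy_gd(data, w, loss_grad, steps, eta=0.05, sigma=0.1, radius=1.0, rng=None):
    """Projected noisy gradient descent on the empirical loss over `data`.

    Each step: a full-batch gradient step, plus Gaussian noise (the
    Langevin term; the sqrt(2*eta)*sigma scale is one standard
    discretization, chosen here for illustration), then projection onto
    an L2 ball so iterates stay bounded.
    """
    rng = rng or np.random.default_rng(0)
    for _ in range(steps):
        grad = np.mean([loss_grad(w, x, y) for x, y in data], axis=0)
        w = w - eta * grad + np.sqrt(2 * eta) * sigma * rng.standard_normal(w.shape)
        norm = np.linalg.norm(w)
        if norm > radius:  # projection step
            w = w * (radius / norm)
    return w

# Hypothetical logistic-loss gradient, for illustration only.
def logistic_grad(w, x, y):
    return -y * x / (1.0 + np.exp(y * np.dot(w, x)))

rng = np.random.default_rng(42)
data = [(rng.standard_normal(5), rng.choice([-1.0, 1.0])) for _ in range(100)]

w0 = np.zeros(5)
w_learned = noisy_gd(data, w0, logistic_grad, steps=200)  # DP learning phase

# Unlearning request: drop 10 points, then fine-tune on the remainder
# with the same noisy dynamics, far fewer steps than retraining.
retained = data[10:]
w_unlearned = noisy_gd(retained, w_learned, logistic_grad, steps=20)
```

The key design point reflected here is that learning and unlearning are the same noisy process run on different datasets, which is what lets the short fine-tuning phase inherit privacy-style guarantees; the actual step counts and noise levels certified by the paper depend on its analysis, not on the placeholder values above.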

Cite

Text

Chien et al. "Langevin Unlearning: A New Perspective of Noisy Gradient Descent for Machine Unlearning." Neural Information Processing Systems, 2024. doi:10.52202/079017-2530

Markdown

[Chien et al. "Langevin Unlearning: A New Perspective of Noisy Gradient Descent for Machine Unlearning." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/chien2024neurips-langevin/) doi:10.52202/079017-2530

BibTeX

@inproceedings{chien2024neurips-langevin,
  title     = {{Langevin Unlearning: A New Perspective of Noisy Gradient Descent for Machine Unlearning}},
  author    = {Chien, Eli and Wang, Haoyu and Chen, Ziang and Li, Pan},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2530},
  url       = {https://mlanthology.org/neurips/2024/chien2024neurips-langevin/}
}