MUSO: Achieving Exact Machine Unlearning in Over-Parameterized Regimes

Abstract

Machine unlearning (MU) aims to make a well-trained model behave as if it had never been trained on specific data. In today's over-parameterized regime, dominated by neural networks, a common approach is to manually relabel the data to be forgotten and fine-tune the well-trained model. This can approximate the MU model in the output space, but it remains an open question whether it achieves exact MU, i.e., equivalence in the parameter space. We answer this question by employing random feature techniques to construct an analytical framework. Under the premise of model optimization via stochastic gradient descent, we theoretically demonstrate that over-parameterized linear models can achieve exact MU by relabeling specific data. We then extend this analysis to real-world nonlinear networks and propose an alternating optimization algorithm that unifies the tasks of unlearning and relabeling. Numerical experiments confirm the algorithm's effectiveness, showing superior unlearning performance across various scenarios compared with current state-of-the-art methods, and in particular over similar relabeling-based MU approaches.
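The parameter-space claim for the linear case can be illustrated with a minimal NumPy sketch. This is not the paper's MUSO algorithm: all names, sizes, and the choice of relabeling targets below are illustrative assumptions. It fits an over-parameterized random-feature model, retrains from scratch without a forget set (the exact-MU gold standard), relabels the forget samples, and fine-tunes from the original parameters with SGD; the SGD iterate moves toward the retrained parameters, not merely toward matching outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy over-parameterized setting (illustrative sizes): 30 samples, d = 5
# inputs, random-feature map to p = 200 dimensions, so p > n.
n, d, p = 30, 5, 200
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
W = rng.normal(size=(d, p)) / np.sqrt(d)   # fixed random weights
Phi = np.tanh(X @ W)                       # random features, entries in (-1, 1)

def min_norm_fit(features, targets):
    # Minimum-norm interpolating least-squares solution,
    # i.e. the limit of (S)GD started from zero.
    return np.linalg.pinv(features) @ targets

theta_full = min_norm_fit(Phi, y)          # the "well-trained" model

# Exact-MU reference: retrain from scratch without the forget set.
forget = np.arange(5)                      # indices to unlearn (assumption)
retain = np.setdiff1d(np.arange(n), forget)
theta_retrain = min_norm_fit(Phi[retain], y[retain])

# Relabel-and-fine-tune: assign the forget samples new targets (here, the
# retrained model's own predictions, chosen so the retrained parameters
# interpolate the relabeled data), then run SGD from theta_full.
y_new = y.copy()
y_new[forget] = Phi[forget] @ theta_retrain
theta = theta_full.copy()
lr = 0.005                                 # lr * ||Phi[i]||^2 < 2 keeps SGD stable
for _ in range(20000):
    i = rng.integers(n)
    theta -= lr * (Phi[i] @ theta - y_new[i]) * Phi[i]

# Distance to the retrained model in PARAMETER space, before and after.
gap_before = np.linalg.norm(theta_full - theta_retrain)
gap_after = np.linalg.norm(theta - theta_retrain)
```

Because both `theta_full` and `theta_retrain` lie in the row span of the features, SGD on the relabeled data contracts the parameter-space gap at every effective step, so `gap_after` falls below `gap_before`; in practice one cannot relabel with the retrained model itself, which is why an algorithm must estimate suitable targets.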

Cite

Text

Yang et al. "MUSO: Achieving Exact Machine Unlearning in Over-Parameterized Regimes." Machine Learning, 2025. doi:10.1007/s10994-025-06806-0

Markdown

[Yang et al. "MUSO: Achieving Exact Machine Unlearning in Over-Parameterized Regimes." Machine Learning, 2025.](https://mlanthology.org/mlj/2025/yang2025mlj-muso/) doi:10.1007/s10994-025-06806-0

BibTeX

@article{yang2025mlj-muso,
  title     = {{MUSO: Achieving Exact Machine Unlearning in Over-Parameterized Regimes}},
  author    = {Yang, Ruikai and He, Mingzhen and He, Zhenghao and Qiu, Youmei and Huang, Xiaolin},
  journal   = {Machine Learning},
  year      = {2025},
  pages     = {176},
  doi       = {10.1007/s10994-025-06806-0},
  volume    = {114},
  url       = {https://mlanthology.org/mlj/2025/yang2025mlj-muso/}
}