Toward Efficient Data-Free Unlearning

Abstract

Machine unlearning without access to the real data distribution is challenging. The existing method based on data-free distillation achieves unlearning by filtering out synthetic samples that contain forgetting information, but it struggles to distill the retaining-related knowledge efficiently. In this work, we analyze this problem and attribute it to over-filtering, which discards much of the synthesized retaining-related information. We propose a novel method, Inhibited Synthetic PostFilter (ISPF), to tackle this challenge from two perspectives: first, the Inhibited Synthetic reduces the forgetting information carried by synthesized samples; second, the PostFilter fully utilizes the retaining-related information in the synthesized samples. Experimental results demonstrate that the proposed ISPF effectively tackles this challenge and outperforms existing methods.
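
The two components named in the abstract can be illustrated concretely. The sketch below is a minimal, hypothetical PyTorch rendering of the two ideas, assuming a standard data-free distillation setup: an inhibition term that discourages the generator from synthesizing forgetting information, and a post-filter that masks out forget-class logits during distillation rather than discarding whole synthetic samples. All names and loss forms here (generator_loss, postfilter_distill_loss, forget_classes, inhibit_weight) are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of the two ISPF ideas described in the abstract,
# assuming a PyTorch data-free distillation setup. Not the authors' code.
import torch
import torch.nn.functional as F

def generator_loss(teacher, generator, z, forget_classes, inhibit_weight=1.0):
    """Inhibited Synthetic: train the generator for data-free distillation
    while penalizing confidence on the classes to be forgotten, so fewer
    synthesized samples carry forgetting information."""
    x = generator(z)
    probs = F.softmax(teacher(x), dim=1)
    # Standard data-free objective: encourage confident teacher predictions.
    confidence = -probs.max(dim=1).values.log().mean()
    # Inhibition term: suppress probability mass on the forget classes.
    inhibition = probs[:, forget_classes].sum(dim=1).mean()
    return confidence + inhibit_weight * inhibition

def postfilter_distill_loss(teacher_logits, student_logits, forget_classes, T=2.0):
    """PostFilter: instead of discarding whole synthetic samples, mask out the
    forget-class logits and distill only the retaining-related outputs, so
    every synthesized sample still contributes retaining knowledge."""
    keep = [c for c in range(teacher_logits.size(1)) if c not in forget_classes]
    t = teacher_logits[:, keep] / T
    s = student_logits[:, keep] / T
    return F.kl_div(F.log_softmax(s, dim=1), F.softmax(t, dim=1),
                    reduction="batchmean") * T * T

In this reading, masking only the forget-class outputs, rather than dropping entire samples, is what lets every synthesized sample still contribute retaining-related knowledge; the paper defines the actual objectives.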

Cite

Text

Zhang et al. "Toward Efficient Data-Free Unlearning." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I21.34393

Markdown

[Zhang et al. "Toward Efficient Data-Free Unlearning." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/zhang2025aaai-efficient/) doi:10.1609/AAAI.V39I21.34393

BibTeX

@inproceedings{zhang2025aaai-efficient,
  title     = {{Toward Efficient Data-Free Unlearning}},
  author    = {Zhang, Chenhao and Shen, Shaofei and Chen, Weitong and Xu, Miao},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {22372--22379},
  doi       = {10.1609/AAAI.V39I21.34393},
  url       = {https://mlanthology.org/aaai/2025/zhang2025aaai-efficient/}
}