PDUDT: Provable Decentralized Unlearning Under Dynamic Topologies
Abstract
This paper investigates decentralized unlearning, which aims to eliminate the impact of a specific client on the whole decentralized system. However, the characteristics of decentralized communication pose new challenges for effective unlearning: indirect connections make it difficult to trace the specific client's impact, while the dynamic topology limits the scalability of retraining-based unlearning methods. In this paper, we propose the first Provable Decentralized Unlearning algorithm under Dynamic Topologies, called PDUDT. It allows clients to eliminate the influence of a specific client without additional communication or retraining. We provide rigorous theoretical guarantees for PDUDT, showing that it is statistically indistinguishable from perturbed retraining. In addition, it achieves an efficient convergence rate of $\mathcal{O}(\frac{1}{T})$ in subsequent learning, where $T$ is the total number of communication rounds; this rate matches state-of-the-art results. Experimental results show that, compared with the Retrain method, PDUDT saves more than 99% of unlearning time while achieving comparable unlearning performance.
Cite
Text
Qiao et al. "PDUDT: Provable Decentralized Unlearning Under Dynamic Topologies." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Qiao et al. "PDUDT: Provable Decentralized Unlearning Under Dynamic Topologies." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/qiao2025icml-pdudt/)

BibTeX
@inproceedings{qiao2025icml-pdudt,
title = {{PDUDT: Provable Decentralized Unlearning Under Dynamic Topologies}},
author = {Qiao, Jing and Liu, Yu and Chen, Zengzhe and Li, Mingyi and Yuan, Yuan and Zhang, Xiao and Yu, Dongxiao},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {50142--50170},
volume = {267},
url = {https://mlanthology.org/icml/2025/qiao2025icml-pdudt/}
}