LoTUS: Large-Scale Machine Unlearning with a Taste of Uncertainty
Abstract
We present LoTUS, a novel Machine Unlearning (MU) method that eliminates the influence of training samples from pre-trained models, avoiding retraining from scratch. LoTUS smooths the model's prediction probabilities up to an information-theoretic bound, mitigating the over-confidence that stems from data memorization. We evaluate LoTUS on Transformer and ResNet18 models against eight baselines across five public datasets. Beyond established MU benchmarks, we evaluate unlearning on ImageNet1k, a large-scale dataset where retraining is impractical, simulating real-world conditions. Moreover, we introduce the novel Retrain-Free Jensen-Shannon Divergence (RF-JSD) metric to enable evaluation under these conditions. Experimental results show that LoTUS outperforms state-of-the-art methods in both efficiency and effectiveness. Code: https://github.com/cspartalis/LoTUS
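To make the two ideas in the abstract concrete, below is a minimal, self-contained PyTorch sketch: a probability-smoothing step that interpolates softmax outputs toward the uniform distribution (a hypothetical stand-in for LoTUS's entropy-bounded smoothing, not the paper's exact rule), and the Jensen-Shannon divergence that the RF-JSD metric builds on. The function names, the interpolation coefficient alpha, and the epsilon stabilizer are illustrative assumptions; the repository linked above contains the actual implementation.

# Illustrative sketch only; not the authors' code. smooth_probs and
# js_divergence are hypothetical helpers for exposition.
import torch
import torch.nn.functional as F

def smooth_probs(logits: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Interpolate softmax probabilities toward the uniform distribution.

    alpha=0 keeps the original (possibly over-confident) predictions;
    alpha=1 yields maximum-entropy (uniform) predictions. This linear
    interpolation is an assumed stand-in for LoTUS's bounded smoothing.
    """
    probs = F.softmax(logits, dim=-1)
    uniform = torch.full_like(probs, 1.0 / probs.size(-1))
    return (1.0 - alpha) * probs + alpha * uniform

def js_divergence(p: torch.Tensor, q: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Jensen-Shannon divergence between two batches of distributions.

    JSD(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), with m = (p + q) / 2.
    Returns the mean over the batch. eps guards against log(0).
    """
    m = 0.5 * (p + q)
    kl_pm = (p * ((p + eps).log() - (m + eps).log())).sum(dim=-1)
    kl_qm = (q * ((q + eps).log() - (m + eps).log())).sum(dim=-1)
    return (0.5 * (kl_pm + kl_qm)).mean()

if __name__ == "__main__":
    logits = torch.tensor([[8.0, 1.0, 0.5, 0.2]])  # over-confident prediction
    original = F.softmax(logits, dim=-1)
    smoothed = smooth_probs(logits, alpha=0.5)
    print("original:", original)
    print("smoothed:", smoothed)
    print("JSD(original, smoothed):", js_divergence(original, smoothed).item())

Note that RF-JSD itself is defined in the paper as a retrain-free metric; the js_divergence utility above only shows the divergence it is built from, applied here to a single over-confident prediction before and after smoothing.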
Cite
Text
Spartalis et al. "LoTUS: Large-Scale Machine Unlearning with a Taste of Uncertainty." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.00939
Markdown
[Spartalis et al. "LoTUS: Large-Scale Machine Unlearning with a Taste of Uncertainty." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/spartalis2025cvpr-lotus/) doi:10.1109/CVPR52734.2025.00939
BibTeX
@inproceedings{spartalis2025cvpr-lotus,
title = {{LoTUS: Large-Scale Machine Unlearning with a Taste of Uncertainty}},
author = {Spartalis, Christoforos N. and Semertzidis, Theodoros and Gavves, Efstratios and Daras, Petros},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2025},
pages = {10046-10055},
doi = {10.1109/CVPR52734.2025.00939},
url = {https://mlanthology.org/cvpr/2025/spartalis2025cvpr-lotus/}
}