Generalization-Preserved Learning: Closing the Backdoor to Catastrophic Forgetting in Continual Deepfake Detection
Abstract
Existing continual deepfake detection methods typically treat stability (retaining previously learned forgery knowledge) and plasticity (adapting to novel forgeries) as conflicting properties, emphasizing an inherent trade-off between them, while regarding generalization to unseen forgeries as secondary. In contrast, we reframe the problem: stability and plasticity can coexist and be jointly improved through the model's inherent generalization. Specifically, we propose Generalization-Preserved Learning (GPL), a novel framework consisting of two key components: (1) Hyperbolic Visual Alignment, which introduces learnable watermarks to align incremental data with the base set in hyperbolic space, alleviating inter-task distribution shifts; (2) Generalized Gradient Projection, which prevents parameter updates that conflict with generalization constraints, ensuring that learning new knowledge does not interfere with previously acquired knowledge. Notably, GPL requires neither backbone retraining nor historical data storage. Experiments conducted on four mainstream datasets (FF++, Celeb-DF v2, DFD, and DFDCP) demonstrate that GPL achieves an accuracy of 92.14%, outperforming replay-based state-of-the-art methods by 2.15%, while reducing forgetting by 2.66%. Moreover, GPL achieves an 18.38% improvement on unseen forgeries using only 1% of baseline parameters, thus presenting an efficient adaptation to continuously evolving forgery techniques.
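To give a flavor of the gradient-projection idea behind component (2), here is a minimal illustrative sketch. The paper's exact generalization constraints are not reproduced here; this sketch assumes a protected subspace with orthonormal basis `U` (spanning directions deemed important for generalization) and projects each new-task gradient onto the orthogonal complement of that subspace, so that updates cannot move the model along protected directions. The function name `project_gradient` and the use of a random subspace are assumptions for illustration only.

```python
import numpy as np

def project_gradient(g: np.ndarray, U: np.ndarray) -> np.ndarray:
    """Remove the component of gradient g lying in span(U).

    g: (d,) gradient vector computed on the new task.
    U: (d, k) matrix with orthonormal columns spanning the
       generalization-critical subspace (an assumption of this sketch).
    """
    # Subtract the projection of g onto span(U); the remainder is
    # orthogonal to every protected direction, so the update leaves
    # those directions untouched.
    return g - U @ (U.T @ g)

rng = np.random.default_rng(0)
d, k = 8, 3
# Orthonormal basis for a random k-dimensional protected subspace.
U, _ = np.linalg.qr(rng.standard_normal((d, k)))
g = rng.standard_normal(d)
g_proj = project_gradient(g, U)

# Residual overlap with the protected subspace is numerically zero.
print(float(np.max(np.abs(U.T @ g_proj))))
```

In continual-learning practice (e.g., Gradient Projection Memory-style methods), such a basis is typically estimated from representations of earlier tasks; here it is random purely for demonstration.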
Cite
Text
Zhang et al. "Generalization-Preserved Learning: Closing the Backdoor to Catastrophic Forgetting in Continual Deepfake Detection." International Conference on Computer Vision, 2025.
Markdown
[Zhang et al. "Generalization-Preserved Learning: Closing the Backdoor to Catastrophic Forgetting in Continual Deepfake Detection." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/zhang2025iccv-generalizationpreserved/)
BibTeX
@inproceedings{zhang2025iccv-generalizationpreserved,
title = {{Generalization-Preserved Learning: Closing the Backdoor to Catastrophic Forgetting in Continual Deepfake Detection}},
author = {Zhang, Xueyi and Zhu, Peiyin and Zhang, Chengwei and Yan, Zhiyuan and Cheng, Jikang and Lao, Mingrui and Cai, Siqi and Guo, Yanming},
booktitle = {International Conference on Computer Vision},
year = {2025},
pages = {3798--3808},
url = {https://mlanthology.org/iccv/2025/zhang2025iccv-generalizationpreserved/}
}