Implicit Identity Leakage: The Stumbling Block to Improving Deepfake Detection Generalization
Abstract
In this paper, we analyse the generalization ability of binary classifiers for the task of deepfake detection. We find that the stumbling block to their generalization is an unexpectedly learned identity representation in images. Termed Implicit Identity Leakage, this phenomenon has been verified both qualitatively and quantitatively across various DNNs. Building on this understanding, we propose a simple yet effective method, the ID-unaware Deepfake Detection Model, to reduce the influence of this phenomenon. Extensive experimental results demonstrate that our method outperforms the state of the art in both in-dataset and cross-dataset evaluations. The code is available at https://github.com/megvii-research/CADDM.
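The abstract describes the method only at a high level; as a rough illustration of how a training signal can be decoupled from facial identity, below is a minimal patch-swap augmentation sketch in Python. This is a hypothetical toy, not the authors' CADDM implementation (see the linked repository for that): the function `swap_patch`, its `patch_size` parameter, and the random-array demo are all assumptions made for illustration. The idea is that a sample labeled "fake" differs from a real one only in a local blended region, so a detector is pushed to score local artifacts rather than whole-face identity.

```python
import numpy as np
from typing import Optional, Tuple

def swap_patch(real_img: np.ndarray, donor_img: np.ndarray,
               patch_size: int = 32,
               rng: Optional[np.random.Generator] = None) -> Tuple[np.ndarray, int]:
    """Paste a random patch from `donor_img` into `real_img`.

    The returned image carries a purely local manipulation while the
    global identity of the face is unchanged, so the fake/real label
    no longer correlates with identity features.
    """
    rng = rng or np.random.default_rng()
    h, w = real_img.shape[:2]
    y = int(rng.integers(0, h - patch_size))
    x = int(rng.integers(0, w - patch_size))
    out = real_img.copy()
    out[y:y + patch_size, x:x + patch_size] = \
        donor_img[y:y + patch_size, x:x + patch_size]
    # Label 1 ("fake"): a blended region is present even though the
    # surrounding face, and hence its identity, is untouched.
    return out, 1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
    donor = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
    fake, label = swap_patch(real, donor, rng=rng)
    print(fake.shape, label)  # (128, 128, 3) 1
```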
Cite
Text
Dong et al. "Implicit Identity Leakage: The Stumbling Block to Improving Deepfake Detection Generalization." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.00389
Markdown
[Dong et al. "Implicit Identity Leakage: The Stumbling Block to Improving Deepfake Detection Generalization." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/dong2023cvpr-implicit/) doi:10.1109/CVPR52729.2023.00389
BibTeX
@inproceedings{dong2023cvpr-implicit,
  title     = {{Implicit Identity Leakage: The Stumbling Block to Improving Deepfake Detection Generalization}},
  author    = {Dong, Shichao and Wang, Jin and Ji, Renhe and Liang, Jiajun and Fan, Haoqiang and Ge, Zheng},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {3994--4004},
  doi       = {10.1109/CVPR52729.2023.00389},
  url       = {https://mlanthology.org/cvpr/2023/dong2023cvpr-implicit/}
}