Enabling Users to Falsify Deepfake Attacks

Abstract

The rise of deepfake technology has made everyone vulnerable to false claims based on manipulated media. While many existing deepfake detection methods aim to identify fake media, they often struggle with deepfakes created by generative models not seen during training. In this paper, we propose FACTOR, a method that enables users to prove that media claiming to depict them is false. FACTOR rests on two key assumptions: (i) generative models struggle to exactly reproduce a specific identity, and (ii) they often fail to perfectly synchronize generated lip movements with speech. By combining these assumptions with powerful modern representation encoders, FACTOR remains highly effective even against previously unseen deepfakes. Extensive experiments show that FACTOR significantly outperforms state-of-the-art deepfake detection techniques, despite being simple to implement and not relying on any fake data for pretraining.
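The sketch below illustrates the identity-verification idea behind assumption (i): embeddings of faces in the suspect media are compared against embeddings of reference images of the claimed person, and low similarity falsifies the claim. This is a minimal illustration only; the encoder choice, the threshold tau, and the function names are assumptions, not the paper's exact implementation.

import numpy as np

def verify_identity(reference_embeddings, media_embeddings, tau=0.7):
    # Flag the media as falsified if its face embeddings are far from the
    # claimed identity's reference embeddings. tau is an illustrative threshold.
    def normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    refs = normalize(np.asarray(reference_embeddings))   # (R, d) enrollment images
    media = normalize(np.asarray(media_embeddings))      # (M, d) faces from the suspect media

    # Cosine similarity of every media face to its closest reference face.
    sims = media @ refs.T                                # (M, R)
    truth_score = sims.max(axis=1).min()                 # weakest frame decides

    # Low similarity to the claimed identity -> the claim is falsified.
    return truth_score < tau, truth_score

In this reading, embeddings would come from any strong pretrained face encoder; assumption (ii) could be handled analogously by scoring audio-visual synchronization with a pretrained lip-sync encoder.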

Cite

Text

Reiss et al. "Enabling Users to Falsify Deepfake Attacks." Transactions on Machine Learning Research, 2025.

Markdown

[Reiss et al. "Enabling Users to Falsify Deepfake Attacks." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/reiss2025tmlr-enabling/)

BibTeX

@article{reiss2025tmlr-enabling,
  title     = {{Enabling Users to Falsify Deepfake Attacks}},
  author    = {Reiss, Tal and Cavia, Bar and Hoshen, Yedid},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/reiss2025tmlr-enabling/}
}