Detect Fake with Fake: Leveraging Synthetic Data-Driven Representation for Synthetic Image Detection

Abstract

Are general-purpose visual representations acquired solely from synthetic data useful for detecting fake images? In this work, we show the effectiveness of synthetic data-driven representations for synthetic image detection. Upon analysis, we find that vision transformers trained by the latest visual representation learners with synthetic data can effectively distinguish fake from real images without seeing any real images during pre-training. Notably, using SynCLR as the backbone in a state-of-the-art detection method demonstrates a performance improvement of $\boldsymbol{+10.32}$ mAP and $\boldsymbol{+4.73}\%$ accuracy over the widely used CLIP, when tested on previously unseen GAN models. Code is available at https://github.com/cvpaperchallenge/detect-fake-with-fake.

Cite

Text

Otake et al. "Detect Fake with Fake: Leveraging Synthetic Data-Driven Representation for Synthetic Image Detection." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-92648-8_19

Markdown

[Otake et al. "Detect Fake with Fake: Leveraging Synthetic Data-Driven Representation for Synthetic Image Detection." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/otake2024eccvw-detect/) doi:10.1007/978-3-031-92648-8_19

BibTeX

@inproceedings{otake2024eccvw-detect,
  title     = {{Detect Fake with Fake: Leveraging Synthetic Data-Driven Representation for Synthetic Image Detection}},
  author    = {Otake, Hina and Fukuhara, Yoshihiro and Kubotani, Yoshiki and Morishima, Shigeo},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2024},
  pages     = {316--332},
  doi       = {10.1007/978-3-031-92648-8_19},
  url       = {https://mlanthology.org/eccvw/2024/otake2024eccvw-detect/}
}