Unlocking the Potential of Lightweight Quantized Models for Deepfake Detection
Abstract
Deepfake detection is increasingly crucial due to the rapid rise of AI-generated content. Existing methods achieve high performance by relying on computationally intensive large models, making real-time detection on resource-constrained edge devices challenging. Since deepfake detection is a binary classification task, there is substantial headroom for model compression and acceleration. In this paper, we propose a low-bit quantization framework for lightweight and efficient deepfake detection. The Connected Quantized Block extracts common forgery features via the quantized path and retains method-specific textures through shortcut connections. Additionally, the Shifted Logarithmic Redistribution Quantizer mitigates information loss in near-zero domains by unfolding the unbalanced activations, enabling finer quantization granularity. Comprehensive experiments demonstrate that the proposed framework reduces computational costs by 10.8x and storage requirements by 12.4x while maintaining high detection performance, even surpassing SOTA methods using less than 5% of their FLOPs, paving the way for efficient deepfake detection in resource-limited scenarios.
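The core idea behind the Shifted Logarithmic Redistribution Quantizer is that activations cluster near zero, where a uniform low-bit grid wastes resolution. A minimal sketch of this idea, assuming a simple shifted-log transform followed by uniform quantization (the function name, `shift` parameter, and bit layout here are illustrative, not the paper's exact formulation):

```python
import numpy as np

def shifted_log_quantize(x, bits=4, shift=1e-2):
    """Illustrative shifted-log quantizer (hypothetical parameters).

    Near-zero activations are 'unfolded' by a shifted logarithm so that
    a uniform low-bit grid covers them with finer granularity.
    """
    sign = np.sign(x)
    # Shifted log transform: maps |x| in [0, max] to a spread-out range >= 0.
    y = np.log(np.abs(x) + shift) - np.log(shift)
    # Uniform quantization on the transformed values.
    levels = 2 ** (bits - 1) - 1
    y_max = y.max() if y.max() > 0 else 1.0
    q = np.round(y / y_max * levels) / levels * y_max
    # Invert the transform to recover dequantized activations.
    return sign * (shift * np.exp(q) - shift)

rng = np.random.default_rng(0)
x = rng.standard_normal(1000) * 0.1  # activations concentrated near zero
xq = shifted_log_quantize(x, bits=4)
```

Because the log transform expands the near-zero region before the uniform grid is applied, small-magnitude activations receive proportionally more quantization levels than they would under plain uniform quantization.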
Cite
Text
Tao et al. "Unlocking the Potential of Lightweight Quantized Models for Deepfake Detection." International Joint Conference on Artificial Intelligence, 2025. doi:10.24963/IJCAI.2025/59
Markdown
[Tao et al. "Unlocking the Potential of Lightweight Quantized Models for Deepfake Detection." International Joint Conference on Artificial Intelligence, 2025.](https://mlanthology.org/ijcai/2025/tao2025ijcai-unlocking/) doi:10.24963/IJCAI.2025/59
BibTeX
@inproceedings{tao2025ijcai-unlocking,
title = {{Unlocking the Potential of Lightweight Quantized Models for Deepfake Detection}},
author = {Tao, Renshuai and Qin, Ziheng and Ding, Yifu and Tan, Chuangchuang and Wang, Jiakai and Wang, Wei},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2025},
  pages = {520--528},
doi = {10.24963/IJCAI.2025/59},
url = {https://mlanthology.org/ijcai/2025/tao2025ijcai-unlocking/}
}