SLIP: Spoof-Aware One-Class Face Anti-Spoofing with Language Image Pretraining
Abstract
Face anti-spoofing (FAS) plays a pivotal role in ensuring the security and reliability of face recognition systems. With advancements in vision-language pretrained (VLP) models, recent two-class FAS techniques have leveraged the advantages of VLP guidance, whereas this potential remains unexplored in one-class FAS methods. One-class FAS focuses on learning intrinsic liveness features solely from live training images to differentiate between live and spoof faces. However, the lack of spoof training data can lead one-class FAS models to inadvertently incorporate domain information irrelevant to the live/spoof distinction (e.g., facial content), causing performance degradation when tested on a new application domain. To address this issue, we propose a novel framework called Spoof-aware one-class face anti-spoofing with Language Image Pretraining (SLIP). Given that live faces should ideally not be obscured by any spoof-attack-related objects (e.g., paper or masks) and are assumed to yield zero spoof cue maps, we first propose an effective language-guided spoof cue map estimation to enhance one-class FAS models by simulating how the underlying faces would be covered by attack-related objects and generating the corresponding nonzero spoof cue maps. Next, we introduce a novel prompt-driven liveness feature disentanglement to alleviate live/spoof-irrelevant domain variations by disentangling live/spoof-relevant and domain-dependent information. Finally, we design an effective augmentation strategy that fuses latent features from live images and spoof prompts to generate spoof-like image features, thus diversifying latent spoof features and facilitating the learning of one-class FAS. Extensive experiments and ablation studies show that SLIP consistently outperforms previous one-class FAS methods.
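The augmentation strategy described above fuses latent features from live images with spoof text prompts to synthesize spoof-like features. A minimal sketch of one plausible form of such fusion, assuming CLIP-style L2-normalized embeddings and a convex mixing coefficient `alpha`; the feature vectors and the `fuse_spoof_like` helper below are hypothetical placeholders for real image/text encoder outputs, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x):
    """Project a vector onto the unit hypersphere (CLIP-style embedding)."""
    return x / np.linalg.norm(x)

# Placeholder embeddings standing in for real encoder outputs:
# live_image_feat   ~ image_encoder(live face image)
# spoof_prompt_feat ~ text_encoder("a photo of a face covered by paper")
live_image_feat = l2_normalize(rng.standard_normal(512))
spoof_prompt_feat = l2_normalize(rng.standard_normal(512))

def fuse_spoof_like(live_feat, prompt_feat, alpha=0.5):
    """Convex fusion of a live image feature and a spoof prompt feature,
    yielding a spoof-like latent feature (an assumed form of the fusion)."""
    mixed = (1.0 - alpha) * live_feat + alpha * prompt_feat
    return l2_normalize(mixed)

# Varying alpha diversifies the synthesized spoof-like features.
spoof_like = fuse_spoof_like(live_image_feat, spoof_prompt_feat, alpha=0.5)
```

Such synthesized features could then serve as negative samples when training the one-class model, compensating for the absence of real spoof training images.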
Cite
Text
Huang et al. "SLIP: Spoof-Aware One-Class Face Anti-Spoofing with Language Image Pretraining." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I4.32385

Markdown
[Huang et al. "SLIP: Spoof-Aware One-Class Face Anti-Spoofing with Language Image Pretraining." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/huang2025aaai-slip/) doi:10.1609/AAAI.V39I4.32385

BibTeX
@inproceedings{huang2025aaai-slip,
title = {{SLIP: Spoof-Aware One-Class Face Anti-Spoofing with Language Image Pretraining}},
author = {Huang, Pei-Kai and Chong, Jun-Xiong and Chiang, Cheng-Hsuan and Chen, Tzu-Hsien and Liu, Tyng-Luh and Hsu, Chiou-Ting},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {3697-3706},
doi = {10.1609/AAAI.V39I4.32385},
url = {https://mlanthology.org/aaai/2025/huang2025aaai-slip/}
}