Text to Stealthy Adversarial Face Masks
Abstract
Recent studies have demonstrated that modern facial recognition systems, which are based on deep neural networks, are vulnerable to adversarial attacks, including the use of accessories, makeup patterns, or precision lighting. However, developing attacks that are both robust (resilient to changes in viewing angles and environmental conditions) and stealthy (not attracting suspicion by, for example, incorporating obvious facial features) remains a significant challenge. In this context, we introduce a novel diffusion-based method (DAFR) capable of generating robust and stealthy face masks for dodging recognition systems (where the system fails to identify the attacker). Specifically, our approach can produce high-fidelity printable textures, using textual prompts to guide the style. The method can also be adapted for impersonation, where the system misidentifies the attacker as a specific other individual. Finally, we address a gap in the existing literature by presenting a comprehensive benchmark (FAAB) for evaluating adversarial accessories along three dimensions, assessing their robustness and stealthiness.
Cite

Text

Lewis et al. "Text to Stealthy Adversarial Face Masks." Transactions on Machine Learning Research, 2025.

Markdown

[Lewis et al. "Text to Stealthy Adversarial Face Masks." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/lewis2025tmlr-text/)

BibTeX
@article{lewis2025tmlr-text,
  title = {{Text to Stealthy Adversarial Face Masks}},
  author = {Lewis, Ben and Moyse, Thomas and Parkinson, James and Telford, Elizabeth and Whitfield, Callum and Lazic, Ranko},
  journal = {Transactions on Machine Learning Research},
  year = {2025},
  url = {https://mlanthology.org/tmlr/2025/lewis2025tmlr-text/}
}