Latent Guard: A Safety Framework for Text-to-Image Generation
Abstract
With the ability to generate high-quality images, text-to-image (T2I) models can be exploited for creating inappropriate content. To prevent misuse, existing safety measures rely either on text blacklists, which are easily circumvented, or on harmful content classification, which requires large datasets for training and offers low flexibility. Here, we propose Latent Guard, a framework designed to improve safety measures in text-to-image generation. Inspired by blacklist-based approaches, Latent Guard learns a latent space on top of the T2I model’s text encoder, in which we check for the presence of harmful concepts in the input text embeddings. Our framework is composed of a task-specific data generation pipeline using large language models, ad-hoc architectural components, and a contrastive learning strategy that exploits the generated data. Our method is evaluated on three datasets and against four baselines. Warning: This paper contains potentially offensive text and images.
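The core check described above amounts to measuring, in a learned latent space, how close an input prompt's embedding lies to the embeddings of blacklisted harmful concepts. The sketch below illustrates that idea in a minimal form; the class name `LatentConceptChecker`, the cosine-similarity test, and the threshold value are illustrative assumptions, not the paper's actual architecture or training procedure.

```python
import numpy as np

def _normalize(v):
    # Unit-normalize an embedding so dot products become cosine similarities.
    return v / np.linalg.norm(v)

class LatentConceptChecker:
    """Illustrative concept check: flag a prompt whose latent embedding is
    close to any blacklisted-concept embedding (hypothetical sketch)."""

    def __init__(self, concept_embeddings, threshold=0.8):
        # Rows: unit-normalized embeddings of harmful concepts in the latent space.
        self.concepts = np.stack([_normalize(c) for c in concept_embeddings])
        self.threshold = threshold  # assumed similarity cutoff, not from the paper

    def is_unsafe(self, prompt_embedding):
        # Cosine similarity of the prompt against every concept;
        # flag the prompt if any similarity exceeds the threshold.
        sims = self.concepts @ _normalize(prompt_embedding)
        return bool(sims.max() >= self.threshold)

# Toy 3-d embeddings stand in for the text encoder's output.
concepts = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
checker = LatentConceptChecker(concepts, threshold=0.8)
```

In the actual framework, the embeddings would come from the T2I model's text encoder mapped through the learned latent space, and the concept set would be populated by the LLM-driven data generation pipeline.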
Cite
Text
Liu et al. "Latent Guard: A Safety Framework for Text-to-Image Generation." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73347-5_6
Markdown
[Liu et al. "Latent Guard: A Safety Framework for Text-to-Image Generation." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/liu2024eccv-latent/) doi:10.1007/978-3-031-73347-5_6
BibTeX
@inproceedings{liu2024eccv-latent,
title = {{Latent Guard: A Safety Framework for Text-to-Image Generation}},
author = {Liu, Runtao and Khakzar, Ashkan and Gu, Jindong and Chen, Qifeng and Torr, Philip and Pizzati, Fabio},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-73347-5_6},
url = {https://mlanthology.org/eccv/2024/liu2024eccv-latent/}
}