IMMA: Immunizing Text-to-Image Models Against Malicious Adaptation
Abstract
Advancements in open-sourced text-to-image models and fine-tuning methods have led to the increasing risk of malicious adaptation, i.e., fine-tuning to generate harmful/unauthorized content. Recent works, e.g., Glaze or MIST, have developed data-poisoning techniques which protect the data against adaptation methods. In this work, we consider an alternative paradigm for protection. We propose to “immunize” the model by learning model parameters that are difficult for the adaptation methods when fine-tuning malicious content; in short IMMA. Specifically, IMMA should be applied before the release of the model weights to mitigate these malicious adaptation risks. Empirical results show IMMA’s effectiveness against malicious adaptations, including mimicking the artistic style and learning of inappropriate/unauthorized content, over three adaptation methods: LoRA, Textual-Inversion, and DreamBooth. The code is available at https://github.com/amberyzheng/IMMA.
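The core idea in the abstract, learning weights that resist later malicious fine-tuning, can be pictured as an alternating min-max training loop: an inner step that simulates an adversary adapting to protected content, and an outer step that updates the base model to keep that adaptation ineffective. The sketch below is only an illustrative assumption of such a loop in PyTorch; the helper `adaptation_loss`, the optimizer choices, and the single-step alternating schedule are placeholders and not the paper's actual algorithm (see the linked repository for the real implementation).

```python
# Minimal sketch of an "immunization" loop, assuming a hypothetical
# model.adaptation_loss(batch) that returns the fine-tuning loss an
# adversary would minimize (e.g., a diffusion reconstruction loss).
import torch

def immunize(model, adapter_params, protected_batches, steps=1000,
             model_lr=1e-5, adapter_lr=1e-4):
    model_opt = torch.optim.Adam(model.parameters(), lr=model_lr)
    adapter_opt = torch.optim.Adam(adapter_params, lr=adapter_lr)

    for _ in range(steps):
        batch = next(protected_batches)

        # Inner step: simulate the adversary adapting to the protected
        # content (e.g., LoRA-style adapter minimizing the loss).
        adapter_loss = model.adaptation_loss(batch)  # hypothetical helper
        adapter_opt.zero_grad()
        adapter_loss.backward()
        adapter_opt.step()

        # Outer step: update the base model so the adaptation loss stays
        # high even after the inner adaptation step (gradient ascent).
        immunization_loss = -model.adaptation_loss(batch)
        model_opt.zero_grad()
        immunization_loss.backward()
        model_opt.step()

    return model  # immunized weights, released in place of the originals
```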
Cite
Text
Zheng and Yeh. "IMMA: Immunizing Text-to-Image Models Against Malicious Adaptation." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72933-1_26
Markdown
[Zheng and Yeh. "IMMA: Immunizing Text-to-Image Models Against Malicious Adaptation." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/zheng2024eccv-imma/) doi:10.1007/978-3-031-72933-1_26
BibTeX
@inproceedings{zheng2024eccv-imma,
title = {{IMMA: Immunizing Text-to-Image Models Against Malicious Adaptation}},
author = {Zheng, Amber Yijia and Yeh, Raymond A.},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-72933-1_26},
url = {https://mlanthology.org/eccv/2024/zheng2024eccv-imma/}
}