Multi-Concept Model Immunization Through Differentiable Model Merging
Abstract
Model immunization is an emerging direction that aims to mitigate the potential risk of misuse posed by open-source models combined with advancing adaptation methods. The idea is to make the released models' weights difficult to fine-tune on certain harmful applications, hence the name "immunized". Recent work on model immunization focuses on the single-concept setting. However, in real-world situations, models need to be immunized against multiple concepts. To address this gap, we propose an immunization algorithm that simultaneously learns a single "difficult initialization" for adaptation methods over a set of concepts. We achieve this by incorporating a differentiable merging layer that combines a set of model weights adapted over multiple concepts. In our experiments, we demonstrate the effectiveness of multi-concept immunization by generalizing prior work's experimental setup for re-learning and personalization adaptation to multiple concepts.
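The abstract does not specify the exact form of the differentiable merging layer, but one common differentiable way to combine a set of per-concept adapted weights is a convex combination whose coefficients come from a softmax over learnable logits, so gradients can flow back through the merge. The sketch below is purely illustrative under that assumption; `merge_weights`, `per_concept_weights`, and `logits` are hypothetical names, not identifiers from the paper.

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def merge_weights(per_concept_weights, logits):
    """Differentiably combine per-concept adapted weight vectors.

    Illustrative assumption, not the paper's method: `per_concept_weights`
    is a list of K weight vectors (one per immunized concept), and `logits`
    are learnable merge coefficients. The softmax keeps the combination on
    the probability simplex, and every operation is smooth, so an optimizer
    could backpropagate through the merge into the logits.
    """
    alphas = softmax(logits)
    dim = len(per_concept_weights[0])
    merged = [0.0] * dim
    for alpha, w in zip(alphas, per_concept_weights):
        for i in range(dim):
            merged[i] += alpha * w[i]
    return merged

# Two concepts' adapted weights merged with equal logits -> simple average.
w_merged = merge_weights([[1.0, 2.0], [3.0, 4.0]], [0.0, 0.0])
```

With equal logits the softmax yields uniform coefficients, so the merge reduces to averaging the per-concept weights; unequal logits would bias the merged initialization toward particular concepts.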
Cite
Text
Zheng and Yeh. "Multi-Concept Model Immunization Through Differentiable Model Merging." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I10.33145
Markdown
[Zheng and Yeh. "Multi-Concept Model Immunization Through Differentiable Model Merging." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/zheng2025aaai-multi-a/) doi:10.1609/AAAI.V39I10.33145
BibTeX
@inproceedings{zheng2025aaai-multi-a,
title = {{Multi-Concept Model Immunization Through Differentiable Model Merging}},
author = {Zheng, Amber Yijia and Yeh, Raymond A.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {10546--10554},
doi = {10.1609/AAAI.V39I10.33145},
url = {https://mlanthology.org/aaai/2025/zheng2025aaai-multi-a/}
}