Model Immunization from a Condition Number Perspective
Abstract
Model immunization aims to pre-train models that are difficult to fine-tune on harmful tasks while retaining their utility on other, non-harmful tasks. Though prior work has shown empirical evidence for immunizing text-to-image models, a precise definition of an immunized model and an understanding of when immunization is possible remain unclear. In this work, we propose a framework, based on the condition number of a Hessian matrix, to analyze model immunization for linear models. Building on this framework, we design an algorithm with regularization terms to control the resulting condition numbers after pre-training. Empirical results on linear models and non-linear deep-nets demonstrate the effectiveness of the proposed algorithm for model immunization. The code is available at https://github.com/amberyzheng/model-immunization-cond-num.
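To make the condition-number framing concrete, here is a minimal sketch (not the paper's algorithm) of the underlying intuition for linear least squares: the Hessian of the loss is proportional to X^T X, and gradient descent converges at a rate governed by its condition number, so a badly conditioned Hessian on the harmful task makes fine-tuning slow. The rescaled feature column below is a toy stand-in for an immunized representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# For the least-squares loss L(w) = ||Xw - y||^2 / (2n),
# the Hessian is H = X^T X / n. Gradient descent on this loss
# converges at a rate governed by kappa(H); a large condition
# number means fine-tuning on that task is slow.
X_harm = rng.normal(size=(100, 5))
X_harm[:, 0] *= 100.0  # toy ill-conditioning (stand-in for immunization)
H_harm = X_harm.T @ X_harm / X_harm.shape[0]

X_other = rng.normal(size=(100, 5))  # well-conditioned non-harmful task
H_other = X_other.T @ X_other / X_other.shape[0]

kappa_harm = np.linalg.cond(H_harm)
kappa_other = np.linalg.cond(H_other)
print(kappa_harm > kappa_other)  # the harmful task's Hessian is far worse conditioned
```

The paper's contribution is to control such condition numbers jointly, keeping the Hessian well conditioned on non-harmful tasks while making it ill conditioned on the harmful one; this sketch only illustrates why the condition number governs fine-tuning difficulty.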
Cite
Zheng et al. "Model Immunization from a Condition Number Perspective." Proceedings of the 42nd International Conference on Machine Learning, 2025. https://mlanthology.org/icml/2025/zheng2025icml-model/

BibTeX
@inproceedings{zheng2025icml-model,
title = {{Model Immunization from a Condition Number Perspective}},
author = {Zheng, Amber Yijia and Bai, Site and Bullins, Brian and Yeh, Raymond A.},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {78041--78066},
volume = {267},
url = {https://mlanthology.org/icml/2025/zheng2025icml-model/}
}