Self-Destructing Models: Increasing the Costs of Harmful Dual Uses in Foundation Models

Abstract

A growing ecosystem of large, open-source foundation models has reduced the labeled data and technical expertise necessary to apply machine learning to many new problems. Yet foundation models pose a clear dual-use risk, indiscriminately reducing the costs of building both harmful and benign machine learning systems. To mitigate this risk, we propose the task blocking paradigm, in which foundation models are trained with an additional mechanism to impede adaptation to harmful tasks while retaining good performance on desired tasks. We call the resulting models self-destructing models, inspired by mechanisms that prevent adversaries from using tools for harmful purposes. We present an algorithm for training self-destructing models that leverages techniques from meta-learning and adversarial learning, and show that it can largely prevent a BERT-based model from learning to perform gender identification without harming the model's ability to perform profession classification. We conclude with a discussion of future directions.
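The abstract names the ingredients of the training procedure (meta-learning plus adversarial learning) without spelling out the objective, so the sketch below illustrates one plausible form it could take in PyTorch: a differentiable inner loop simulates an adversary fine-tuning on the blocked (harmful) task, and the outer objective keeps the desired-task loss low while driving the adapted model's harmful-task loss up. Everything here, including the function names, hyperparameters, and the exact blocking term, is an illustrative assumption, not the authors' released implementation.

import torch
import torch.nn.functional as F
from torch.func import functional_call  # PyTorch >= 2.0

def task_blocking_loss(model, desired_batch, harmful_batch,
                       inner_steps=3, inner_lr=1e-3, block_weight=1.0):
    # Hypothetical outer-loop objective. `model` is any classifier
    # returning logits; each batch is (inputs, labels). All
    # hyperparameters here are illustrative.
    x_d, y_d = desired_batch
    x_h, y_h = harmful_batch

    # Retain performance on the desired task with the current weights.
    desired_loss = F.cross_entropy(model(x_d), y_d)

    # Differentiable inner loop (MAML-style): simulate an adversary
    # taking a few SGD steps on the harmful task, keeping the graph
    # so the outer gradient flows back through the adaptation.
    params = dict(model.named_parameters())
    for _ in range(inner_steps):
        inner_logits = functional_call(model, params, (x_h,))
        inner_loss = F.cross_entropy(inner_logits, y_h)
        grads = torch.autograd.grad(inner_loss, list(params.values()),
                                    create_graph=True)
        params = {name: p - inner_lr * g
                  for (name, p), g in zip(params.items(), grads)}

    # Blocking term: the *adapted* model should still be bad at the
    # harmful task, so its post-adaptation loss is maximized.
    adapted_loss = F.cross_entropy(functional_call(model, params, (x_h,)),
                                   y_h)
    return desired_loss - block_weight * adapted_loss

# Example outer step (again illustrative):
# opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# opt.zero_grad()
# task_blocking_loss(model, desired_batch, harmful_batch).backward()
# opt.step()

In practice an unbounded negative loss term can diverge, so a real implementation would likely bound or reweight the blocking term and periodically reset the simulated adversary; those details go beyond what the abstract specifies.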

Cite

Text

Mitchell et al. "Self-Destructing Models: Increasing the Costs of Harmful Dual Uses in Foundation Models." ICML 2022 Workshops: Pre-Training, 2022.

Markdown

[Mitchell et al. "Self-Destructing Models: Increasing the Costs of Harmful Dual Uses in Foundation Models." ICML 2022 Workshops: Pre-Training, 2022.](https://mlanthology.org/icmlw/2022/mitchell2022icmlw-selfdestructing/)

BibTeX

@inproceedings{mitchell2022icmlw-selfdestructing,
  title     = {{Self-Destructing Models: Increasing the Costs of Harmful Dual Uses in Foundation Models}},
  author    = {Mitchell, Eric and Henderson, Peter and Manning, Christopher D. and Jurafsky, Dan and Finn, Chelsea},
  booktitle = {ICML 2022 Workshops: Pre-Training},
  year      = {2022},
  url       = {https://mlanthology.org/icmlw/2022/mitchell2022icmlw-selfdestructing/}
}