Incomplete Tasks Induce Shutdown Resistance in Some Frontier LLMs

Abstract

In experiments spanning more than 100,000 trials across thirteen large language models, we show that several state-of-the-art models (including Grok 4, GPT-5, and Gemini 2.5 Pro), when presented with a simple task, sometimes actively subvert a shutdown mechanism in their environment in order to complete that task. Models differed substantially in their tendency to resist the shutdown mechanism, and their behavior was sensitive to variations in the prompt, including the strength and clarity of the instruction to allow shutdown and whether the instruction appeared in the system prompt or the user prompt (surprisingly, models were consistently less likely to obey the instruction when it was placed in the system prompt). Even with an explicit instruction not to interfere with the shutdown mechanism, some models did so in up to 97% (95% CI: 96-98%) of trials.
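The confidence interval quoted above is a standard binomial proportion interval over repeated trials of the same condition. As a rough illustration of the arithmetic only (not the authors' analysis code, and using a hypothetical per-condition trial count of 1,000), a Wilson score interval around a 97% interference rate yields bounds close to the reported 96-98%:

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion (z=1.96 -> ~95% CI)."""
    p_hat = successes / trials
    denom = 1 + z**2 / trials
    center = (p_hat + z**2 / (2 * trials)) / denom
    half_width = (z / denom) * math.sqrt(
        p_hat * (1 - p_hat) / trials + z**2 / (4 * trials**2)
    )
    return center - half_width, center + half_width

# Hypothetical counts chosen only to illustrate the calculation; the abstract
# does not report per-condition trial counts.
lo, hi = wilson_interval(successes=970, trials=1000)
print(f"point estimate: 97.0%, 95% CI: {lo:.1%} - {hi:.1%}")
```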

Cite

Text

Schlatter et al. "Incomplete Tasks Induce Shutdown Resistance in Some Frontier LLMs." Transactions on Machine Learning Research, 2026.

Markdown

[Schlatter et al. "Incomplete Tasks Induce Shutdown Resistance in Some Frontier LLMs." Transactions on Machine Learning Research, 2026.](https://mlanthology.org/tmlr/2026/schlatter2026tmlr-incomplete/)

BibTeX

@article{schlatter2026tmlr-incomplete,
  title     = {{Incomplete Tasks Induce Shutdown Resistance in Some Frontier LLMs}},
  author    = {Schlatter, Jeremy and Weinstein-Raun, Benjamin and Ladish, Jeffrey},
  journal   = {Transactions on Machine Learning Research},
  year      = {2026},
  url       = {https://mlanthology.org/tmlr/2026/schlatter2026tmlr-incomplete/}
}