Optimal Policies Tend to Seek Power

Abstract

Some researchers speculate that intelligent reinforcement learning (RL) agents would be incentivized to seek resources and power in pursuit of the objectives we specify for them. Other researchers point out that RL agents need not have human-like power-seeking instincts. To clarify this discussion, we develop the first formal theory of the statistical tendencies of optimal policies. In the context of Markov decision processes, we prove that certain environmental symmetries are sufficient for optimal policies to tend to seek power over the environment. These symmetries exist in many environments in which the agent can be shut down or destroyed. We prove that in these environments, most reward functions make it optimal to seek power by keeping a range of options available and, when maximizing average reward, by navigating towards larger sets of potential terminal states.
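The abstract's central claim — that most reward functions make it optimal to keep a larger set of options available — can be illustrated with a toy experiment. The sketch below is not the paper's formal setup; it assumes a hypothetical two-branch MDP in which one action reaches a single terminal state and the other reaches three, with terminal rewards drawn i.i.d. uniformly. An optimal policy simply steers toward the branch containing the highest reachable reward, so the larger option set wins whenever the maximum of its three draws exceeds the single draw on the other branch.

```python
import random

# Hypothetical toy MDP (illustrative assumption, not the paper's formalism):
# from the start state, action "few" leads to 1 terminal state and action
# "many" leads to 3 terminal states. Terminal states are absorbing, and each
# carries a reward drawn i.i.d. from Uniform[0, 1]. The optimal policy picks
# whichever branch contains the highest reachable terminal reward.

random.seed(0)
TRIALS = 100_000

prefer_many = 0
for _ in range(TRIALS):
    rewards = [random.random() for _ in range(4)]  # index 0: "few"; 1-3: "many"
    few_value = rewards[0]
    many_value = max(rewards[1:])
    if many_value > few_value:
        prefer_many += 1

fraction = prefer_many / TRIALS
print(f"fraction of reward draws preferring the larger option set: {fraction:.3f}")
```

Under these assumptions the fraction should concentrate near 3/4, since the maximum of three independent uniforms exceeds a fourth with probability 3/4 — a concrete instance of the statistical tendency the abstract describes: for most sampled reward functions, the optimal policy navigates toward the larger set of potential terminal states.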

Cite

Text

Turner et al. "Optimal Policies Tend to Seek Power." Neural Information Processing Systems, 2021.

Markdown

[Turner et al. "Optimal Policies Tend to Seek Power." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/turner2021neurips-optimal/)

BibTeX

@inproceedings{turner2021neurips-optimal,
  title     = {{Optimal Policies Tend to Seek Power}},
  author    = {Turner, Alex and Smith, Logan and Shah, Rohin and Critch, Andrew and Tadepalli, Prasad},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/turner2021neurips-optimal/}
}