Path-Specific Objectives for Safer Agent Incentives

Abstract

We present a general framework for training safe agents whose naive incentives are unsafe. For example, manipulative or deceptive behaviour can increase reward but should be avoided. Most approaches fail here: agents maximize expected return by any means necessary. We formally describe settings with 'delicate' parts of the state that should not be used as a means to an end. Using Causal Influence Diagram analysis, we then train agents to maximize the causal effect of actions on the expected return that is not mediated by the delicate parts of the state. The resulting agents have no incentive to control the delicate state. We further show how our framework unifies and generalizes existing proposals.
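To make the core idea concrete, here is a minimal toy sketch (not from the paper; all functions and coefficients are hypothetical). In a simple structural model where an action influences a delicate state, which in turn influences reward, a path-specific objective evaluates the reward with the delicate state held at the value it would take under a baseline action, so influencing the delicate state earns the agent nothing:

```python
# Hypothetical toy structural causal model (illustrative only):
# action a -> delicate state d -> reward, plus a direct path a -> reward.

def delicate_state(a):
    # Delicate part of the state that responds to the action
    # (e.g. a user's preferences being shifted by the agent).
    return 2.0 * a

def reward(a, d):
    # Direct effect of the action plus an effect mediated by the delicate state.
    return 1.0 * a + 3.0 * d

def naive_return(a):
    # Standard objective: the agent is credited for controlling d.
    return reward(a, delicate_state(a))

def path_specific_return(a, baseline=0.0):
    # Path-specific objective: hold d at the value it would take under a
    # baseline action, so only the non-delicate-mediated path contributes.
    return reward(a, delicate_state(baseline))
```

Under the naive objective the return grows with the mediated path (slope 1 + 2*3 = 7 in this toy model), whereas the path-specific objective responds only to the direct path (slope 1), removing the incentive to control the delicate state.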

Cite

Text

Farquhar et al. "Path-Specific Objectives for Safer Agent Incentives." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/aaai.v36i9.21186

Markdown

[Farquhar et al. "Path-Specific Objectives for Safer Agent Incentives." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/farquhar2022aaai-path/) doi:10.1609/aaai.v36i9.21186

BibTeX

@inproceedings{farquhar2022aaai-path,
  title     = {{Path-Specific Objectives for Safer Agent Incentives}},
  author    = {Farquhar, Sebastian and Carey, Ryan and Everitt, Tom},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {9529--9538},
  doi       = {10.1609/aaai.v36i9.21186},
  url       = {https://mlanthology.org/aaai/2022/farquhar2022aaai-path/}
}