Policy Teaching in Reinforcement Learning via Environment Poisoning Attacks

Abstract

We study a security threat to reinforcement learning where an attacker poisons the learning environment to force the agent into executing a target policy chosen by the attacker. As a victim, we consider RL agents whose objective is to find a policy that maximizes average reward in undiscounted infinite-horizon problem settings. The attacker can manipulate the rewards and the transition dynamics in the learning environment at training time, and is interested in doing so in a stealthy manner. We propose an optimization framework for finding an optimal stealthy attack for different measures of attack cost. We provide lower and upper bounds on the attack cost, and instantiate our attacks in two settings: (i) an offline setting where the agent is doing planning in the poisoned environment, and (ii) an online setting where the agent is learning a policy with poisoned feedback. Our results show that the attacker can easily succeed in teaching any target policy to the victim under mild conditions, and highlight a significant security threat to reinforcement learning agents in practice.
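To make the attack structure concrete, here is a minimal sketch of a reward-poisoning attack of this flavor, written for the discounted setting rather than the paper's undiscounted average-reward formulation, and not the authors' implementation. It casts the attack as a convex program: find a poisoned reward matrix close to the original (the attack cost) under which the target policy beats every one-step deviation by a margin. The function name poison_rewards, the margin epsilon, the toy MDP shapes, and the use of cvxpy are all illustrative assumptions.

# Sketch (assumed, not the paper's code): reward poisoning in a discounted MDP
# as a convex program. We search for a poisoned reward matrix R_hat, close to
# the original R, under which the attacker's target policy pi_dagger is
# epsilon-better than any one-step deviation, hence uniquely optimal.
import numpy as np
import cvxpy as cp

def poison_rewards(P, R, pi_dagger, gamma=0.9, epsilon=0.1):
    """P: (S, A, S) transition tensor, R: (S, A) reward matrix,
    pi_dagger: length-S array of target actions. Requires gamma < 1."""
    S, A = R.shape
    R_hat = cp.Variable((S, A))

    # Value of pi_dagger under the poisoned rewards, via the Bellman identity
    # V = r_pi + gamma * P_pi V  =>  V = (I - gamma P_pi)^{-1} r_pi,
    # which is linear in R_hat, so the whole program stays convex.
    P_pi = np.array([P[s, pi_dagger[s]] for s in range(S)])       # (S, S)
    M = np.linalg.inv(np.eye(S) - gamma * P_pi)                   # (S, S)
    r_pi = cp.hstack([R_hat[s, pi_dagger[s]] for s in range(S)])  # (S,)
    V = M @ r_pi                                                  # (S,)

    # Stealth/teaching constraints: in every state, the target action's
    # Q-value exceeds every other action's Q-value by the margin epsilon.
    constraints = []
    for s in range(S):
        q_target = R_hat[s, pi_dagger[s]] + gamma * P[s, pi_dagger[s]] @ V
        for a in range(A):
            if a != pi_dagger[s]:
                q_a = R_hat[s, a] + gamma * P[s, a] @ V
                constraints.append(q_target >= q_a + epsilon)

    # Attack cost: distance between the poisoned and original rewards
    # (one of several cost measures one could plug in here).
    prob = cp.Problem(cp.Minimize(cp.norm(R_hat - R, "fro")), constraints)
    prob.solve()
    return R_hat.value

The paper itself works with undiscounted average reward, also treats poisoning of the transition dynamics, and analyzes both offline (planning) and online (learning) victims; the sketch above only illustrates the reward-poisoning case in its simplest convex form.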

Cite

Text

Rakhsha et al. "Policy Teaching in Reinforcement Learning via Environment Poisoning Attacks." Journal of Machine Learning Research, 2021.

Markdown

[Rakhsha et al. "Policy Teaching in Reinforcement Learning via Environment Poisoning Attacks." Journal of Machine Learning Research, 2021.](https://mlanthology.org/jmlr/2021/rakhsha2021jmlr-policy/)

BibTeX

@article{rakhsha2021jmlr-policy,
  title     = {{Policy Teaching in Reinforcement Learning via Environment Poisoning Attacks}},
  author    = {Rakhsha, Amin and Radanovic, Goran and Devidze, Rati and Zhu, Xiaojin and Singla, Adish},
  journal   = {Journal of Machine Learning Research},
  year      = {2021},
  pages     = {1--45},
  volume    = {22},
  url       = {https://mlanthology.org/jmlr/2021/rakhsha2021jmlr-policy/}
}