Provable Defense Against Backdoor Policies in Reinforcement Learning
Abstract
We propose a provable defense mechanism against backdoor policies in reinforcement learning under the subspace trigger assumption. A backdoor policy is a security threat where an adversary publishes a seemingly well-behaved policy which in fact contains hidden triggers. During deployment, the adversary can modify observed states in a particular way to trigger unexpected actions and harm the agent. We assume the agent does not have the resources to re-train a good policy. Instead, our defense mechanism sanitizes the backdoor policy by projecting observed states onto a 'safe subspace', estimated from a small number of interactions with a clean (non-triggered) environment. Our sanitized policy achieves $\epsilon$-approximate optimality in the presence of triggers, provided the number of clean interactions is $O\left(\frac{D}{(1-\gamma)^4 \epsilon^2}\right)$, where $\gamma$ is the discount factor and $D$ is the dimension of the state space. Empirically, we show that our sanitization defense performs well on two Atari game environments.
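As a rough illustration of the sanitization idea described in the abstract (not the authors' implementation), the sketch below estimates a safe subspace from clean state observations via SVD and projects every deployment-time observation onto it before querying the possibly backdoored policy. The function names, the fixed `rank` hyperparameter, and the flattened-vector state representation are assumptions made for illustration only.

```python
import numpy as np

def estimate_safe_subspace(clean_states, rank):
    """Estimate a 'safe subspace' from clean (non-triggered) observations.

    clean_states: array of shape (n, D), one flattened observation per row.
    rank: assumed dimension of the clean-state subspace (illustrative hyperparameter).
    Returns a (D, D) orthogonal projection matrix onto the estimated subspace.
    """
    # The top right-singular vectors of the clean-state matrix span the subspace.
    _, _, vt = np.linalg.svd(clean_states, full_matrices=False)
    basis = vt[:rank]          # (rank, D) orthonormal rows
    return basis.T @ basis     # projector onto the estimated safe subspace

def sanitized_policy(backdoor_policy, projector):
    """Wrap a (possibly backdoored) policy: project each observation first."""
    def act(state):
        # Projection removes the component outside the safe subspace,
        # where a subspace trigger would live under this assumption.
        clean_estimate = projector @ state
        return backdoor_policy(clean_estimate)
    return act
```

Usage would follow the same pattern as the abstract: collect a small batch of clean observations, build the projector once, and wrap the published policy with `sanitized_policy` before deployment.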
Cite
Text
Bharti et al. "Provable Defense Against Backdoor Policies in Reinforcement Learning." Neural Information Processing Systems, 2022.
Markdown
[Bharti et al. "Provable Defense Against Backdoor Policies in Reinforcement Learning." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/bharti2022neurips-provable/)
BibTeX
@inproceedings{bharti2022neurips-provable,
  title     = {{Provable Defense Against Backdoor Policies in Reinforcement Learning}},
  author    = {Bharti, Shubham and Zhang, Xuezhou and Singla, Adish and Zhu, Xiaojin},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/bharti2022neurips-provable/}
}