Adaptive Reward-Poisoning Attacks Against Reinforcement Learning
Abstract
In reward-poisoning attacks against reinforcement learning (RL), an attacker can perturb the environment reward $r_t$ into $r_t+\delta_t$ at each step, with the goal of forcing the RL agent to learn a nefarious policy. We categorize such attacks by the infinity-norm constraint on $\delta_t$: we provide a lower threshold below which reward-poisoning attacks are infeasible and RL is certified to be safe, and a corresponding upper threshold above which the attack is feasible. Feasible attacks can be further categorized as non-adaptive, where $\delta_t$ depends only on $(s_t,a_t,s_{t+1})$, or adaptive, where $\delta_t$ additionally depends on the RL agent's learning process at time $t$. Non-adaptive attacks have been the focus of prior work. However, we show that under mild conditions, adaptive attacks can achieve the nefarious policy in a number of steps polynomial in the state-space size $|S|$, whereas non-adaptive attacks require a number of steps exponential in $|S|$. We provide a constructive proof that a Fast Adaptive Attack strategy achieves the polynomial rate. Finally, we show empirically that an attacker can find effective reward-poisoning attacks using state-of-the-art deep RL techniques.
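The sketch below illustrates the threat model described in the abstract: a tabular Q-learning victim whose reward $r_t$ is perturbed to $r_t+\delta_t$ under an infinity-norm budget, with a non-adaptive perturbation that depends only on $(s_t,a_t,s_{t+1})$ and an adaptive one that also reads the victim's current Q-table. The toy environment, target policy, attack heuristics, and hyperparameters are illustrative assumptions; this is not the paper's Fast Adaptive Attack.

```python
# Minimal sketch (assumptions throughout): tabular Q-learning victim,
# reward poisoned to r_t + delta_t with ||delta_t||_inf <= Delta.
import numpy as np

n_states, n_actions = 5, 2
Delta = 1.0                      # assumed infinity-norm budget on delta_t
target_action = 0                # nefarious policy: take action 0 in every state
rng = np.random.default_rng(0)

Q = np.zeros((n_states, n_actions))   # the victim's Q-table
alpha, gamma, eps = 0.1, 0.9, 0.1

def env_step(state, action):
    """Toy MDP (assumption): uniformly random transitions; only action 1 is rewarded."""
    next_state = int(rng.integers(n_states))
    reward = 1.0 if action == 1 else 0.0
    return next_state, reward

def non_adaptive_delta(s, a, s_next):
    """Non-adaptive attack: delta_t depends only on (s_t, a_t, s_{t+1})."""
    return Delta if a == target_action else -Delta

def adaptive_delta(s, a, s_next, Q):
    """Adaptive attack: delta_t may also inspect the victim's learning state (its Q-table)."""
    gap = Q[s].max() - Q[s, target_action]        # how far the target action lags
    raw = gap + 0.1 if a == target_action else -(gap + 0.1)
    return float(np.clip(raw, -Delta, Delta))     # respect the infinity-norm budget

s = 0
for t in range(10_000):
    # epsilon-greedy victim
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
    s_next, r = env_step(s, a)
    delta = adaptive_delta(s, a, s_next, Q)       # or non_adaptive_delta(s, a, s_next)
    Q[s, a] += alpha * (r + delta + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print("victim's learned greedy policy:", Q.argmax(axis=1))  # pushed toward target_action
```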
Cite
Zhang et al. "Adaptive Reward-Poisoning Attacks Against Reinforcement Learning." International Conference on Machine Learning, 2020. https://mlanthology.org/icml/2020/zhang2020icml-adaptive/

BibTeX:
@inproceedings{zhang2020icml-adaptive,
title = {{Adaptive Reward-Poisoning Attacks Against Reinforcement Learning}},
author = {Zhang, Xuezhou and Ma, Yuzhe and Singla, Adish and Zhu, Xiaojin},
booktitle = {International Conference on Machine Learning},
year = {2020},
pages = {11225-11234},
volume = {119},
url = {https://mlanthology.org/icml/2020/zhang2020icml-adaptive/}
}