BACKDOORL: Backdoor Attack Against Competitive Reinforcement Learning
Abstract
Recent research has confirmed the feasibility of backdoor attacks in deep reinforcement learning (RL) systems. However, the existing attacks require the ability to arbitrarily modify an agent's observation, constraining the application scope to simple RL systems such as Atari games. In this paper, we migrate backdoor attacks to more complex RL systems involving multiple agents and explore the possibility of triggering the backdoor without directly manipulating the agent's observation. As a proof of concept, we demonstrate that an adversary agent can trigger the backdoor of the victim agent with its own action in two-player competitive RL systems. We prototype and evaluate BackdooRL in four competitive environments. The results show that when the backdoor is activated, the winning rate of the victim drops by 17% to 37% compared to when not activated. The videos are hosted at https://github.com/wanglun1996/multi_agent_rl_backdoor_videos.
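To make the threat model concrete, below is a minimal, self-contained Python sketch of the core idea: a victim whose policy behaves normally until it observes a distinctive action pattern from its opponent, at which point a hidden failing policy takes over. This is an illustrative assumption, not the authors' implementation; the class name `BackdooredVictim`, the specific trigger sequence, and the toy action space are all hypothetical.

```python
import random

# Hypothetical trigger: the opponent repeating action 0 three times in a row.
# In the paper's setting the trigger is a sequence of adversary actions
# observed by the victim; the exact pattern here is an assumption.
TRIGGER = [0, 0, 0]

class BackdooredVictim:
    """Toy victim: acts via a stand-in 'normal' policy until the trigger
    appears in the opponent's action history, then fails fast."""

    def __init__(self):
        self.opponent_history = []
        self.triggered = False

    def act(self, opponent_action, num_actions=3):
        self.opponent_history.append(opponent_action)
        # Check whether the most recent opponent actions match the trigger.
        if self.opponent_history[-len(TRIGGER):] == TRIGGER:
            self.triggered = True
        if self.triggered:
            return 0  # degraded behavior once the backdoor fires
        return random.randrange(num_actions)  # placeholder for the benign policy

def run_episode(adversary_actions, steps=8):
    victim = BackdooredVictim()
    for t in range(steps):
        adv_a = adversary_actions[t % len(adversary_actions)]
        vic_a = victim.act(adv_a)
        print(f"t={t} adversary={adv_a} victim={vic_a} triggered={victim.triggered}")

run_episode([1, 2, 1, 2])     # benign opponent: backdoor stays dormant
run_episode([1, 0, 0, 0, 2])  # adversary embeds the trigger: backdoor activates
```

The key point the sketch illustrates is that no one modifies the victim's observation channel directly; the adversary triggers the backdoor purely through actions that the victim observes as part of normal competitive play.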
Cite
Text
Wang et al. "BACKDOORL: Backdoor Attack Against Competitive Reinforcement Learning." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/509Markdown
[Wang et al. "BACKDOORL: Backdoor Attack Against Competitive Reinforcement Learning." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/wang2021ijcai-backdoorl/) doi:10.24963/IJCAI.2021/509BibTeX
@inproceedings{wang2021ijcai-backdoorl,
  title     = {{BACKDOORL: Backdoor Attack Against Competitive Reinforcement Learning}},
  author    = {Wang, Lun and Javed, Zaynah and Wu, Xian and Guo, Wenbo and Xing, Xinyu and Song, Dawn},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {3699--3705},
  doi       = {10.24963/IJCAI.2021/509},
  url       = {https://mlanthology.org/ijcai/2021/wang2021ijcai-backdoorl/}
}