Behavioral Learning in Security Games: Threat of Multi-Step Manipulative Attacks
Abstract
This paper studies the problem of multi-step manipulative attacks in Stackelberg security games, in which a clever attacker attempts to orchestrate its attacks over multiple time steps to mislead the defender's learning of the attacker's behavior. This attack manipulation eventually influences the defender's patrol strategy towards the attacker's benefit. Previous work along this line of research focuses only on one-shot games, in which the defender learns the attacker's behavior and then designs a corresponding strategy only once. Our work, on the other hand, investigates the long-term impact of the attacker's manipulation, in which the current attack and defense choices of the players determine the defender's future learning and patrol planning. This paper has three key contributions. First, we introduce a new multi-step manipulative attack game model that captures the impact of sequential manipulative attacks carried out by the attacker over the entire time horizon. Second, we propose a new algorithm to compute an optimal manipulative attack plan for the attacker, which tackles the challenge of multiple connected optimization components involved in the computation across multiple time steps. Finally, we present extensive experimental results on the impact of such misleading attacks, showing a significant benefit for the attacker and loss for the defender.
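To make the setting concrete, below is a minimal, self-contained sketch of the one-shot Stackelberg security game that the paper's multi-step model builds on: a defender allocates coverage over targets, and a boundedly rational attacker responds via a quantal response. All target names, payoff numbers, and the rationality parameter are hypothetical illustrations, not taken from the paper, and the brute-force grid search stands in for the paper's actual algorithm.

```python
import math

# Hypothetical 3-target security game (payoffs are illustrative, not from the paper).
# If target t is attacked: defender gets x[t]*Rd[t] + (1-x[t])*Pd[t],
# attacker gets x[t]*Pa[t] + (1-x[t])*Ra[t], where x[t] is coverage probability.
Rd = [2.0, 1.0, 3.0]   # defender reward if attack on t is caught
Pd = [-3.0, -2.0, -4.0]  # defender penalty if attack on t succeeds
Ra = [4.0, 2.0, 5.0]   # attacker reward if attack on t succeeds
Pa = [-1.0, -1.0, -2.0]  # attacker penalty if caught at t
LAMBDA = 1.0  # quantal-response rationality parameter (illustrative)

def attack_probs(x, lam=LAMBDA):
    """Quantal response: attacker hits target t with prob proportional to exp(lam * U_a(t))."""
    ua = [x[t] * Pa[t] + (1 - x[t]) * Ra[t] for t in range(len(x))]
    z = [math.exp(lam * u) for u in ua]
    s = sum(z)
    return [v / s for v in z]

def defender_value(x):
    """Defender's expected utility against the quantal-response attacker."""
    q = attack_probs(x)
    return sum(q[t] * (x[t] * Rd[t] + (1 - x[t]) * Pd[t]) for t in range(len(x)))

def best_coverage(step=0.05):
    """Brute-force grid search for the best split of one defender resource."""
    best_v, best_x = -float("inf"), None
    n = int(round(1 / step))
    for i in range(n + 1):
        for j in range(n + 1 - i):
            x = [i * step, j * step, (n - i - j) * step]
            v = defender_value(x)
            if v > best_v:
                best_v, best_x = v, x
    return best_x, best_v
```

In the paper's multi-step threat model, the attacker would deliberately skew the attack data that the defender's behavior-learning step (here, fitting `LAMBDA` and the response model) consumes, so that the coverage the defender then optimizes is favorable to the attacker over the horizon.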
Cite
Text
Nguyen and Sinha. "Behavioral Learning in Security Games: Threat of Multi-Step Manipulative Attacks." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I8.26115
Markdown
[Nguyen and Sinha. "Behavioral Learning in Security Games: Threat of Multi-Step Manipulative Attacks." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/nguyen2023aaai-behavioral/) doi:10.1609/AAAI.V37I8.26115
BibTeX
@inproceedings{nguyen2023aaai-behavioral,
title = {{Behavioral Learning in Security Games: Threat of Multi-Step Manipulative Attacks}},
author = {Nguyen, Thanh Hong and Sinha, Arunesh},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
  pages = {9302--9309},
doi = {10.1609/AAAI.V37I8.26115},
url = {https://mlanthology.org/aaai/2023/nguyen2023aaai-behavioral/}
}