Three Strategies to Success: Learning Adversary Models in Security Games

Abstract

State-of-the-art applications of Stackelberg security games — including wildlife protection — offer a wealth of data, which can be used to learn the behavior of the adversary. But existing approaches either make strong assumptions about the structure of the data, or gather new data through online algorithms that are likely to play severely suboptimal strategies. We develop a new approach to learning the parameters of the behavioral model of a boundedly rational attacker (thereby pinpointing a near-optimal strategy), by observing how the attacker responds to only three defender strategies. We also validate our approach using experiments on real and synthetic data.
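For intuition, here is a minimal sketch of the general parameter-learning setup the abstract describes: fitting the rationality parameter of a quantal response attacker by maximum likelihood from attack counts observed under a few defender coverage strategies. The utility form U_i = (1 - x_i) R_i - x_i P_i, the toy data, and all names are illustrative assumptions, not the paper's algorithm.

# Illustrative sketch (assumed model, not the paper's method): fit the
# rationality parameter of a quantal response attacker via maximum
# likelihood from attack counts under a few defender strategies.
import numpy as np
from scipy.optimize import minimize_scalar

def attack_probs(lam, coverage, rewards, penalties):
    """Quantal response: target i is attacked with prob proportional to
    exp(lam * U_i), where U_i = (1 - x_i) * R_i - x_i * P_i under coverage x."""
    utils = (1 - coverage) * rewards - coverage * penalties
    z = np.exp(lam * (utils - utils.max()))  # shift by max for numerical stability
    return z / z.sum()

def neg_log_likelihood(lam, strategies, counts, rewards, penalties):
    """Negative log-likelihood of observed attack counts across strategies."""
    nll = 0.0
    for x, c in zip(strategies, counts):
        p = attack_probs(lam, x, rewards, penalties)
        nll -= np.sum(c * np.log(p + 1e-12))
    return nll

# Toy data: 4 targets, attack counts observed under 3 defender strategies.
rewards   = np.array([5.0, 3.0, 8.0, 2.0])   # attacker reward if target uncovered
penalties = np.array([1.0, 1.0, 2.0, 0.5])   # attacker penalty if caught
strategies = [np.array([0.5, 0.2, 0.8, 0.1]),
              np.array([0.3, 0.3, 0.6, 0.4]),
              np.array([0.7, 0.1, 0.5, 0.2])]
counts = [np.array([12, 8, 5, 10]),
          np.array([15, 6, 9, 4]),
          np.array([7, 11, 13, 6])]

res = minimize_scalar(neg_log_likelihood, bounds=(0.01, 10.0), method="bounded",
                      args=(strategies, counts, rewards, penalties))
print(f"estimated rationality parameter lambda = {res.x:.3f}")

With the learned parameter, the defender can evaluate any candidate coverage vector against the fitted attacker model and search for a near-optimal strategy; the paper's contribution is doing this with observations from only three defender strategies.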

Cite

Text

Haghtalab et al. "Three Strategies to Success: Learning Adversary Models in Security Games." International Joint Conference on Artificial Intelligence, 2016.

Markdown

[Haghtalab et al. "Three Strategies to Success: Learning Adversary Models in Security Games." International Joint Conference on Artificial Intelligence, 2016.](https://mlanthology.org/ijcai/2016/haghtalab2016ijcai-three/)

BibTeX

@inproceedings{haghtalab2016ijcai-three,
  title     = {{Three Strategies to Success: Learning Adversary Models in Security Games}},
  author    = {Haghtalab, Nika and Fang, Fei and Nguyen, Thanh Hong and Sinha, Arunesh and Procaccia, Ariel D. and Tambe, Milind},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2016},
  pages     = {308--314},
  url       = {https://mlanthology.org/ijcai/2016/haghtalab2016ijcai-three/}
}