Improving Resource Allocation Strategy Against Human Adversaries in Security Games
Abstract
Recent real-world deployments of Stackelberg security games make it critical that we address human adversaries' bounded rationality in computing optimal strategies. To that end, this paper provides three key contributions: (i) new efficient algorithms for computing optimal strategic solutions using Prospect Theory and Quantal Response Equilibrium; (ii) the most comprehensive experiment to date studying the effectiveness of different models against human subjects for security games; and (iii) new techniques for generating representative payoff structures for behavioral experiments in generic classes of games. Our results with human subjects show that our new techniques outperform the leading contender for modeling human behavior in security games.
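To make the Quantal Response model mentioned in the abstract concrete, here is a minimal sketch of how an adversary's attack probabilities are typically computed under that model: each target is chosen with probability proportional to the exponential of its expected utility, scaled by a rationality parameter λ. The function name, the example utilities, and the λ values below are illustrative, not taken from the paper.

```python
import math

def quantal_response(utilities, lam=1.0):
    """Attack probability for each target under a quantal response
    model: p_i is proportional to exp(lam * u_i)."""
    # Subtract the max utility before exponentiating for numerical stability.
    m = max(utilities)
    weights = [math.exp(lam * (u - m)) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

# lam = 0 gives a uniformly random (fully noisy) adversary;
# as lam grows, the model approaches a perfectly rational best response.
probs = quantal_response([2.0, 1.0, 0.5], lam=1.5)
```

With λ = 0 the adversary attacks all targets with equal probability, while large λ concentrates nearly all probability on the highest-utility target, which is how the model interpolates between bounded and perfect rationality.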
Cite
Text
Yang et al. "Improving Resource Allocation Strategy Against Human Adversaries in Security Games." International Joint Conference on Artificial Intelligence, 2011. doi:10.5591/978-1-57735-516-8/IJCAI11-084
Markdown
[Yang et al. "Improving Resource Allocation Strategy Against Human Adversaries in Security Games." International Joint Conference on Artificial Intelligence, 2011.](https://mlanthology.org/ijcai/2011/yang2011ijcai-improving/) doi:10.5591/978-1-57735-516-8/IJCAI11-084
BibTeX
@inproceedings{yang2011ijcai-improving,
title = {{Improving Resource Allocation Strategy Against Human Adversaries in Security Games}},
author = {Yang, Rong and Kiekintveld, Christopher and Ordóñez, Fernando and Tambe, Milind and John, Richard},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2011},
pages = {458-464},
doi = {10.5591/978-1-57735-516-8/IJCAI11-084},
url = {https://mlanthology.org/ijcai/2011/yang2011ijcai-improving/}
}