Regret-Based Optimization and Preference Elicitation for Stackelberg Security Games with Uncertainty
Abstract
Stackelberg security games (SSGs) have been deployed in a number of real-world domains. One key challenge in these applications is the assessment of attacker payoffs, which may not be perfectly known. Previous work has studied SSGs with uncertain payoffs modeled by interval uncertainty and provided maximin-based robust solutions. In contrast, in this work we propose the use of the less conservative minimax regret decision criterion for such payoff-uncertain SSGs and present the first algorithms for computing minimax regret for SSGs. We also address the challenge of preference elicitation, using minimax regret to develop the first elicitation strategies for SSGs. Experimental results validate the effectiveness of our approaches.
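To make the contrast between the maximin and minimax-regret criteria concrete, here is a minimal illustrative sketch (not the paper's algorithm): a toy two-target security game where the attacker's payoffs are uncertain, represented by a small finite set of candidate scenarios. All numbers, names, and the discretization here are hypothetical.

```python
# Toy 2-target Stackelberg security game with uncertain attacker payoffs.
# Defender strategies: probability of covering target 0 (target 1 gets
# the remaining coverage), discretized on a coarse grid for simplicity.
strategies = [i / 10 for i in range(11)]

# Uncertain payoffs: each scenario gives (attacker reward at target 0,
# attacker reward at target 1); an attacker caught at a covered target
# gets 0. These scenarios stand in for interval payoff uncertainty.
scenarios = [(5.0, 3.0), (2.0, 6.0), (4.0, 4.0)]

def defender_utility(cov0, scenario):
    """Defender utility when target 0 is covered with probability cov0.

    The attacker best-responds by hitting the target with the highest
    expected payoff; the defender's utility is the negative of that gain.
    """
    r0, r1 = scenario
    uncovered0 = 1 - cov0          # prob. target 0 is unprotected
    uncovered1 = cov0              # cov1 = 1 - cov0, so 1 - cov1 = cov0
    attacker_gain = max(uncovered0 * r0, uncovered1 * r1)
    return -attacker_gain

# Maximin (robust) criterion: maximize the worst-case utility over scenarios.
maximin = max(strategies,
              key=lambda c: min(defender_utility(c, s) for s in scenarios))

# Minimax regret: for each scenario, regret is the gap between the best
# utility achievable in that scenario and what the chosen strategy gets;
# pick the strategy minimizing the worst such gap.
def max_regret(cov0):
    return max(
        max(defender_utility(c, s) for c in strategies)
        - defender_utility(cov0, s)
        for s in scenarios
    )

mmr = min(strategies, key=max_regret)
print("maximin strategy:", maximin, "minimax-regret strategy:", mmr)
```

On some payoff sets the two criteria coincide; in general, minimax regret is less conservative because it penalizes a strategy only by how far it falls short of the best response to each scenario, rather than by the raw worst case.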
Cite
Text
Nguyen et al. "Regret-Based Optimization and Preference Elicitation for Stackelberg Security Games with Uncertainty." AAAI Conference on Artificial Intelligence, 2014. doi:10.1609/AAAI.V28I1.8804
Markdown
[Nguyen et al. "Regret-Based Optimization and Preference Elicitation for Stackelberg Security Games with Uncertainty." AAAI Conference on Artificial Intelligence, 2014.](https://mlanthology.org/aaai/2014/nguyen2014aaai-regret/) doi:10.1609/AAAI.V28I1.8804
BibTeX
@inproceedings{nguyen2014aaai-regret,
title = {{Regret-Based Optimization and Preference Elicitation for Stackelberg Security Games with Uncertainty}},
author = {Nguyen, Thanh Hong and Yadav, Amulya and An, Bo and Tambe, Milind and Boutilier, Craig},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2014},
  pages = {756--762},
doi = {10.1609/AAAI.V28I1.8804},
url = {https://mlanthology.org/aaai/2014/nguyen2014aaai-regret/}
}