Robust Reinforcement Learning Under Minimax Regret for Green Security

Abstract

Green security domains feature defenders who plan patrols in the face of uncertainty about the adversarial behavior of poachers, illegal loggers, and illegal fishers. Importantly, the deterrence effect of patrols on adversaries' future behavior makes patrol planning a sequential decision-making problem. Therefore, we focus on robust sequential patrol planning for green security following the minimax regret criterion, which has not been considered in the literature. We formulate the problem as a game between the defender and nature, who controls the parameter values of the adversarial behavior, and design an algorithm, MIRROR, to find a robust policy. MIRROR uses two reinforcement learning–based oracles and solves a restricted game considering limited defender strategies and parameter values. We evaluate MIRROR on real-world poaching data.
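The oracle-based structure described in the abstract can be illustrated with a minimal double-oracle sketch for minimax regret. All names and the toy payoff below are hypothetical stand-ins: a "policy" is a scalar patrol-effort level, `theta` parameterizes adversary behavior, and both oracles are brute-force searches over small grids rather than the paper's reinforcement learning–based oracles, which handle sequential policies and continuous parameters.

```python
# Toy double-oracle loop for minimax regret (hypothetical stand-in for MIRROR).
POLICY_GRID = [i / 10 for i in range(11)]   # candidate defender policies
THETA_GRID = [i / 10 for i in range(11)]    # candidate nature parameters

def payoff(policy, theta):
    # Toy defender utility: best when patrol effort matches adversary behavior.
    return -(policy - theta) ** 2

def best_utility(theta):
    # U*(theta): utility of the defender's best response to theta.
    return max(payoff(p, theta) for p in POLICY_GRID)

def regret(policy, theta):
    return best_utility(theta) - payoff(policy, theta)

def double_oracle(iters=20):
    # Restricted strategy sets, grown one best response at a time.
    D, T = {POLICY_GRID[0]}, {THETA_GRID[0]}
    for _ in range(iters):
        # Solve the restricted game (pure strategies here for simplicity;
        # the full algorithm computes a mixed defender strategy).
        defender = min(D, key=lambda p: max(regret(p, t) for t in T))
        # Nature oracle: parameters maximizing the current policy's regret.
        t_br = max(THETA_GRID, key=lambda t: regret(defender, t))
        # Defender oracle: best policy against the enlarged parameter set.
        p_br = min(POLICY_GRID,
                   key=lambda p: max(regret(p, t) for t in T | {t_br}))
        if t_br in T and p_br in D:
            break  # neither oracle found an improving strategy
        T.add(t_br)
        D.add(p_br)
    return defender, max(regret(defender, t) for t in T)

policy, mmr = double_oracle()
print(policy, mmr)  # minimax-regret policy and its worst-case regret
```

The loop converges once neither oracle can add a strategy that changes the restricted game's value, which is the standard double-oracle stopping condition.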

Cite

Text

Xu et al. "Robust Reinforcement Learning Under Minimax Regret for Green Security." Uncertainty in Artificial Intelligence, 2021.

Markdown

[Xu et al. "Robust Reinforcement Learning Under Minimax Regret for Green Security." Uncertainty in Artificial Intelligence, 2021.](https://mlanthology.org/uai/2021/xu2021uai-robust/)

BibTeX

@inproceedings{xu2021uai-robust,
  title     = {{Robust Reinforcement Learning Under Minimax Regret for Green Security}},
  author    = {Xu, Lily and Perrault, Andrew and Fang, Fei and Chen, Haipeng and Tambe, Milind},
  booktitle = {Uncertainty in Artificial Intelligence},
  year      = {2021},
  pages     = {257--267},
  volume    = {161},
  url       = {https://mlanthology.org/uai/2021/xu2021uai-robust/}
}