Planning in the Presence of Cost Functions Controlled by an Adversary
Abstract
We investigate methods for planning in a Markov Decision Process where the cost function is chosen by an adversary after we fix our policy. As a running example, we consider a robot path planning problem where costs are influenced by sensors that an adversary places in the environment. We formulate the problem as a zero-sum matrix game where rows correspond to deterministic policies for the planning player and columns correspond to cost vectors the adversary can select. For a fixed cost vector, fast algorithms (such as value iteration) are available for solving MDPs. We develop efficient algorithms for matrix games where such best response oracles exist. We show that for our path planning problem these algorithms are at least an order of magnitude faster than direct solution of the linear programming formulation.
ICML, Proceedings of the Twentieth International Conference on Machine Learning
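The oracle-based game solver the abstract describes is the paper's double oracle algorithm: grow each player's strategy set by adding a best response to the opponent's current equilibrium mix, and stop when neither best response improves on the restricted game's value. The sketch below illustrates that loop on a toy matrix game. It is not the paper's implementation: multiplicative-weights self-play (`hedge_equilibrium`) stands in for the exact LP equilibrium solver, and a lookup over a small payoff matrix stands in for the MDP best-response oracle (value iteration) used in the paper; all function names and the rock-paper-scissors example are illustrative.

```python
import math

def hedge_equilibrium(A, rows, cols, iters=10000):
    """Approximate equilibrium of the restricted zero-sum game A[rows][cols]
    via multiplicative-weights self-play (row minimizes cost, column
    maximizes). Returns the time-averaged mixed strategies, which form an
    approximate equilibrium by the standard no-regret argument."""
    n, m = len(rows), len(cols)
    eta = math.sqrt(math.log(max(n, m, 2)) / iters)
    wx, wy = [1.0 / n] * n, [1.0 / m] * m   # current mixed strategies
    ax, ay = [0.0] * n, [0.0] * m           # running sums for the averages
    for _ in range(iters):
        for i in range(n):
            ax[i] += wx[i]
        for j in range(m):
            ay[j] += wy[j]
        # expected cost of each row strategy / gain of each column strategy
        lx = [sum(wy[j] * A[rows[i]][cols[j]] for j in range(m)) for i in range(n)]
        ly = [sum(wx[i] * A[rows[i]][cols[j]] for i in range(n)) for j in range(m)]
        wx = [wx[i] * math.exp(-eta * lx[i]) for i in range(n)]
        wy = [wy[j] * math.exp(eta * ly[j]) for j in range(m)]
        sx, sy = sum(wx), sum(wy)           # renormalize to avoid drift
        wx = [w / sx for w in wx]
        wy = [w / sy for w in wy]
    return [a / iters for a in ax], [a / iters for a in ay]

def double_oracle(A, tol=0.1, max_rounds=20):
    """Double oracle loop: solve the restricted game, query each player's
    best response over the FULL strategy sets, and add any response that
    beats the restricted value by more than tol."""
    R, C = [0], [0]                          # start with one strategy each
    while True:
        x, y = hedge_equilibrium(A, R, C)
        # best-response values over the full game (the "oracle" calls)
        row_costs = [sum(y[j] * A[i][C[j]] for j in range(len(C)))
                     for i in range(len(A))]
        col_costs = [sum(x[i] * A[R[i]][j] for i in range(len(R)))
                     for j in range(len(A[0]))]
        lo, hi = min(row_costs), max(col_costs)  # these bracket the game value
        if hi - lo <= tol or max_rounds == 0:
            return (lo + hi) / 2, R, C
        max_rounds -= 1
        i_new, j_new = row_costs.index(lo), col_costs.index(hi)
        if i_new not in R:
            R.append(i_new)
        if j_new not in C:
            C.append(j_new)
```

On rock-paper-scissors (value 0), starting from a single strategy per player, the loop adds paper and then scissors for both players before the best-response gap closes, converging to roughly the uniform equilibrium. In the paper's setting, the row "strategies" are deterministic MDP policies and the row oracle is a fast MDP solver such as value iteration, which is what makes this approach much faster than solving the full linear program.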
Cite
Text
McMahan et al. "Planning in the Presence of Cost Functions Controlled by an Adversary." International Conference on Machine Learning, 2003.
Markdown
[McMahan et al. "Planning in the Presence of Cost Functions Controlled by an Adversary." International Conference on Machine Learning, 2003.](https://mlanthology.org/icml/2003/mcmahan2003icml-planning/)
BibTeX
@inproceedings{mcmahan2003icml-planning,
title = {{Planning in the Presence of Cost Functions Controlled by an Adversary}},
author = {McMahan, H. Brendan and Gordon, Geoffrey J. and Blum, Avrim},
booktitle = {International Conference on Machine Learning},
year = {2003},
pages = {536--543},
url = {https://mlanthology.org/icml/2003/mcmahan2003icml-planning/}
}