Deterministic MDPs with Adversarial Rewards and Bandit Feedback
Abstract
We consider a Markov decision process with deterministic state transition dynamics, adversarially generated rewards that change arbitrarily from round to round, and a bandit feedback model in which the decision maker only observes the rewards it receives. In this setting, we present a novel and efficient online decision-making algorithm named MarcoPolo. Under mild assumptions on the structure of the transition dynamics, we prove that MarcoPolo enjoys a regret of O(T^{3/4} √log T) against the best deterministic policy in hindsight. Notably, our analysis does not rely on the stringent unichain assumption, which dominates much of the previous work on this topic.
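To make the setting concrete, the following is a minimal Python sketch of the interaction protocol the abstract describes: deterministic transitions, an arbitrary (adversarial) reward sequence, bandit feedback, and regret measured against the best fixed deterministic policy in hindsight. It does not implement MarcoPolo; the toy dynamics, the uniformly random learner, and all names are illustrative assumptions, not taken from the paper.

import numpy as np
from itertools import product

n_states, n_actions, T = 3, 2, 1000
rng = np.random.default_rng(0)

# Deterministic state transition dynamics: next_state[s, a] is a single fixed state.
next_state = rng.integers(n_states, size=(n_states, n_actions))

# Adversarially generated rewards: an arbitrary sequence r_t(s, a) in [0, 1],
# drawn here once in advance for simplicity (an oblivious adversary).
rewards = rng.random((T, n_states, n_actions))

def total_reward(policy, start=0):
    """Cumulative reward of a fixed deterministic policy (a map state -> action)."""
    s, total = start, 0.0
    for t in range(T):
        a = policy[s]
        total += rewards[t, s, a]
        s = next_state[s, a]  # deterministic transition
    return total

# Bandit feedback: the learner only observes the reward of the action it plays.
# A uniformly random learner stands in for the decision maker, purely to show
# how regret is measured.
s, learner_total = 0, 0.0
for t in range(T):
    a = int(rng.integers(n_actions))   # placeholder decision rule
    learner_total += rewards[t, s, a]  # the only feedback revealed at round t
    s = next_state[s, a]

# Regret against the best deterministic policy in hindsight, found by brute force
# over all n_actions**n_states stationary deterministic policies (toy sizes only).
best = max(total_reward(p) for p in product(range(n_actions), repeat=n_states))
print("regret:", best - learner_total)

The random learner above can suffer regret that grows linearly in T; the paper's claim is that, under its mild structural assumptions on the dynamics, MarcoPolo keeps this quantity at O(T^{3/4} √log T).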
Cite
Text
Arora et al. "Deterministic MDPs with Adversarial Rewards and Bandit Feedback." Conference on Uncertainty in Artificial Intelligence, 2012.
BibTeX
@inproceedings{arora2012uai-deterministic,
title = {{Deterministic MDPs with Adversarial Rewards and Bandit Feedback}},
author = {Arora, Raman and Dekel, Ofer and Tewari, Ambuj},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2012},
pages = {93-101},
url = {https://mlanthology.org/uai/2012/arora2012uai-deterministic/}
}