Learning Adversarial Markov Decision Processes with Bandit Feedback and Unknown Transition

Abstract

We consider the task of learning in episodic finite-horizon Markov decision processes with an unknown transition function, bandit feedback, and adversarial losses. We propose an efficient algorithm that achieves $\tilde{\mathcal{O}}(L|X|\sqrt{|A|T})$ regret with high probability, where $L$ is the horizon, $|X|$ the number of states, $|A|$ the number of actions, and $T$ the number of episodes. To our knowledge, our algorithm is the first to ensure $\tilde{\mathcal{O}}(\sqrt{T})$ regret in this challenging setting; in fact, it achieves the same regret as Rosenberg & Mansour (2019a), who consider the easier setting with full-information feedback. Our key contributions are two-fold: a tighter confidence set for the transition function, and an optimistic loss estimator that is inversely weighted by an "upper occupancy bound".
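To make the two ingredients in the abstract concrete, here is a minimal, hedged sketch (not the authors' code) of (i) a Bernstein-style confidence width around empirical transition probabilities and (ii) an importance-weighted loss estimate that divides by an upper occupancy bound plus an implicit-exploration term. All names (`confidence_width`, `loss_estimate`, `gamma`) and the exact constants are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def confidence_width(n_visits: int, p_hat: float, delta: float) -> float:
    """Bernstein-style confidence width for one empirical transition
    probability p_hat estimated from n_visits samples of (x, a).
    Constants are illustrative; the paper's confidence set is of the
    same sqrt(p(1-p)/n) + 1/n flavor but with its own constants."""
    n = max(n_visits, 1)
    log_term = np.log(1.0 / delta)
    return np.sqrt(2.0 * p_hat * (1.0 - p_hat) * log_term / n) + 14.0 * log_term / (3.0 * n)

def loss_estimate(observed_loss: float, visited: bool, upper_occupancy: float, gamma: float) -> float:
    """Optimistic loss estimate for a state-action pair: the observed loss,
    recorded only if (x, a) was visited this episode, inversely weighted by
    an upper occupancy bound (the largest visitation probability consistent
    with the confidence set) plus an exploration parameter gamma."""
    if not visited:
        return 0.0
    return observed_loss / (upper_occupancy + gamma)

# Tiny usage example with made-up numbers.
if __name__ == "__main__":
    print(confidence_width(n_visits=50, p_hat=0.2, delta=0.01))
    print(loss_estimate(observed_loss=0.7, visited=True, upper_occupancy=0.3, gamma=0.05))
```

Dividing by an *upper* bound on the occupancy measure (rather than the unknown true one) keeps the estimator optimistic, i.e., biased downward, which is what allows the regret analysis to go through despite the unknown transition function.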

Cite

Text

Jin et al. "Learning Adversarial Markov Decision Processes with Bandit Feedback and Unknown Transition." International Conference on Machine Learning, 2020.

Markdown

[Jin et al. "Learning Adversarial Markov Decision Processes with Bandit Feedback and Unknown Transition." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/jin2020icml-learning/)

BibTeX

@inproceedings{jin2020icml-learning,
  title     = {{Learning Adversarial Markov Decision Processes with Bandit Feedback and Unknown Transition}},
  author    = {Jin, Chi and Jin, Tiancheng and Luo, Haipeng and Sra, Suvrit and Yu, Tiancheng},
  booktitle = {International Conference on Machine Learning},
  year      = {2020},
  pages     = {4860--4869},
  volume    = {119},
  url       = {https://mlanthology.org/icml/2020/jin2020icml-learning/}
}