Near-Optimal Regret for Adversarial MDP with Delayed Bandit Feedback

Abstract

The standard assumption in reinforcement learning (RL) is that agents observe feedback for their actions immediately. In practice, however, feedback is often observed only after a delay. This paper studies online learning in episodic Markov decision processes (MDPs) with unknown transitions, adversarially changing costs, and unrestricted delayed bandit feedback. More precisely, the feedback for the agent in episode $k$ is revealed only at the end of episode $k + d^k$, where the delay $d^k$ can change between episodes and is chosen by an oblivious adversary. We present the first algorithms that achieve near-optimal $\sqrt{K + D}$ regret, where $K$ is the number of episodes and $D = \sum_{k=1}^K d^k$ is the total delay, significantly improving upon the best known regret bound of $(K + D)^{2/3}$.
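
To make the feedback protocol concrete, below is a minimal sketch (not taken from the paper) of how delayed bandit feedback arrives: the feedback generated in episode $k$ only becomes observable at the end of episode $k + d^k$, and the total delay $D = \sum_k d^k$ is what drives both the $\sqrt{K + D}$ rate above and the earlier $(K + D)^{2/3}$ rate. The function name, the random delay sequence, and the range of delays are all illustrative assumptions.

```python
import math
import random

def feedback_arrivals(delays):
    """Sketch of the delayed bandit feedback protocol: feedback generated
    in episode k is revealed only at the end of episode k + delays[k]
    (the delay sequence is fixed in advance, i.e. by an oblivious adversary).
    Returns, for each episode, the episodes whose feedback the learner
    observes at its end."""
    K = len(delays)
    arrivals = [[] for _ in range(K)]
    for k, d in enumerate(delays):
        if k + d < K:  # feedback that would arrive after episode K is never observed
            arrivals[k + d].append(k)
    return arrivals

# Total delay D = sum_k d^k and the two regret rates quoted in the abstract.
K = 10_000
delays = [random.randint(0, 50) for _ in range(K)]  # illustrative delay sequence
D = sum(delays)
print("total delay D =", D)
print("sqrt(K + D)   =", math.sqrt(K + D))      # near-optimal rate
print("(K + D)^(2/3) =", (K + D) ** (2 / 3))    # previously best known rate
```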

Cite

Text

Jin et al. "Near-Optimal Regret for Adversarial MDP with Delayed Bandit Feedback." Neural Information Processing Systems, 2022.

Markdown

[Jin et al. "Near-Optimal Regret for Adversarial MDP with Delayed Bandit Feedback." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/jin2022neurips-nearoptimal/)

BibTeX

@inproceedings{jin2022neurips-nearoptimal,
  title     = {{Near-Optimal Regret for Adversarial MDP with Delayed Bandit Feedback}},
  author    = {Jin, Tiancheng and Lancewicki, Tal and Luo, Haipeng and Mansour, Yishay and Rosenberg, Aviv},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/jin2022neurips-nearoptimal/}
}