Near-Optimal Policy Optimization Algorithms for Learning Adversarial Linear Mixture MDPs

Abstract

Learning Markov decision processes (MDPs) in the presence of an adversary is a challenging problem in reinforcement learning (RL). In this paper, we study RL in episodic MDPs with adversarial rewards and full-information feedback, where the unknown transition probability function is a linear function of a given feature mapping, and the reward function can change arbitrarily from episode to episode. We propose an optimistic policy optimization algorithm, POWERS, and show that it achieves $\tilde{O}(dH\sqrt{T})$ regret, where $H$ is the length of each episode, $T$ is the number of interactions with the MDP, and $d$ is the dimension of the feature mapping. Furthermore, we prove a matching lower bound of $\tilde{\Omega}(dH\sqrt{T})$, which shows that our upper bound is optimal up to logarithmic factors. Our key technical contributions are two-fold: (1) a new value function estimator based on importance weighting; and (2) a tighter confidence set for the transition kernel. Together, they lead to the nearly minimax optimal regret.
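For reference, the LaTeX sketch below spells out the linear mixture MDP assumption and the adversarial regret notion the abstract refers to. It follows standard conventions from the linear mixture MDP literature; the exact normalizations (e.g., the step-dependent parameter $\theta_h$ and the convention $T = KH$ for $K$ episodes) are assumptions here rather than statements quoted from the paper.

% Sketch, assuming standard linear mixture MDP conventions; details may
% differ from the paper's exact formulation.
% Transition kernel at step h: linear in a known feature map
% \phi(s'|s,a) \in \mathbb{R}^d with an unknown parameter vector \theta_h.
\[
  \mathbb{P}_h(s' \mid s, a) \;=\; \big\langle \phi(s' \mid s, a),\, \theta_h \big\rangle .
\]
% With adversarially chosen rewards in episode k, regret over K episodes is
% measured against the best fixed policy in hindsight (assuming T = KH is
% the total number of interactions with the MDP).
\[
  \mathrm{Regret}(K) \;=\; \max_{\pi} \sum_{k=1}^{K}
    \Big( V_1^{k,\pi}(s_1^k) - V_1^{k,\pi_k}(s_1^k) \Big),
  \qquad T = KH .
\]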

Cite

Text

He et al. "Near-Optimal Policy Optimization Algorithms for Learning Adversarial Linear Mixture MDPs." Artificial Intelligence and Statistics, 2022.

Markdown

[He et al. "Near-Optimal Policy Optimization Algorithms for Learning Adversarial Linear Mixture MDPs." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/he2022aistats-nearoptimal/)

BibTeX

@inproceedings{he2022aistats-nearoptimal,
  title     = {{Near-Optimal Policy Optimization Algorithms for Learning Adversarial Linear Mixture MDPs}},
  author    = {He, Jiafan and Zhou, Dongruo and Gu, Quanquan},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2022},
  pages     = {4259--4280},
  volume    = {151},
  url       = {https://mlanthology.org/aistats/2022/he2022aistats-nearoptimal/}
}