Finding Good Policies in Average-Reward Markov Decision Processes Without Prior Knowledge

Abstract

We revisit the identification of an $\varepsilon$-optimal policy in average-reward Markov Decision Processes (MDPs). In such MDPs, two measures of complexity have appeared in the literature: the diameter, $D$, and the optimal bias span, $H$, which satisfy $H \leq D$. Prior work has studied the complexity of $\varepsilon$-optimal policy identification only when a generative model is available. In this case, it is known that there exists an MDP with $D \simeq H$ for which the sample complexity to output an $\varepsilon$-optimal policy is $\Omega(SAD/\varepsilon^2)$, where $S$ and $A$ are the sizes of the state and action spaces. Recently, an algorithm with a sample complexity of order $SAH/\varepsilon^2$ has been proposed, but it requires knowledge of $H$. We first show that the sample complexity required to estimate $H$ is not bounded by any function of $S$, $A$, and $H$, ruling out the possibility of easily making the previous algorithm agnostic to $H$. By relying instead on a diameter estimation procedure, we propose the first algorithm for $(\varepsilon,\delta)$-PAC policy identification that does not need any form of prior knowledge about the MDP. Its sample complexity scales as $SAD/\varepsilon^2$ in the regime of small $\varepsilon$, which is near-optimal. In the online setting, our first contribution is a lower bound implying that a sample complexity polynomial in $H$ cannot be achieved in this setting. Then, we propose an online algorithm with a sample complexity of order $SAD^2/\varepsilon^2$, as well as a novel approach based on a data-dependent stopping rule that we believe is a promising step toward further reducing this bound.
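To make the two complexity measures concrete, here is a minimal sketch (not the paper's algorithm) that computes both quantities for a small tabular MDP whose transition kernel `P[s, a, s']` and reward table `r[s, a]` are known exactly. The diameter $D = \max_{s \neq s'} \min_{\pi} \mathbb{E}^{\pi}[\text{time to reach } s' \text{ from } s]$ is obtained by value iteration on a stochastic shortest-path problem for each target state, and the optimal bias span $H = \mathrm{sp}(h^*)$ by relative value iteration. The function names, tolerances, and the aperiodicity transform with mixing weight `tau` are our own choices, and the MDP is assumed communicating so that $D < \infty$.

```python
import numpy as np

def diameter(P, tol=1e-8, max_iter=100_000):
    """Diameter D = max over pairs s != s' of the minimal (over policies)
    expected time to reach s' from s.

    P has shape (S, A, S) with P[s, a, s2] the transition probability.
    For each target state t we solve a stochastic shortest-path problem
    by value iteration on  h(s) = 1 + min_a sum_{s2} P[s, a, s2] h(s2),
    with h(t) = 0.  D is finite for a communicating MDP.
    """
    S, _, _ = P.shape
    D = 0.0
    for t in range(S):
        h = np.zeros(S)
        for _ in range(max_iter):
            h_new = 1.0 + np.min(P @ h, axis=1)  # one step + best continuation
            h_new[t] = 0.0                       # target state: no time left to go
            if np.max(np.abs(h_new - h)) < tol:
                h = h_new
                break
            h = h_new
        D = max(D, h.max())  # h[t] = 0, so the max runs over s != t
    return D

def bias_span(P, r, tau=0.5, tol=1e-8, max_iter=100_000):
    """Optimal bias span H = sp(h*) = max_s h*(s) - min_s h*(s).

    Runs relative value iteration on the aperiodicity-transformed kernel
    P_tau = (1 - tau) * I + tau * P (rewards unchanged), which preserves
    the gain and the optimal policies while scaling the bias by 1/tau;
    the computed span is therefore rescaled by tau at the end.
    r has shape (S, A).  Convergence holds under standard conditions
    (e.g., a weakly communicating MDP).
    """
    S = P.shape[0]
    P_tau = (1.0 - tau) * np.eye(S)[:, None, :] + tau * P
    h = np.zeros(S)
    for _ in range(max_iter):
        Th = np.max(r + P_tau @ h, axis=1)  # optimal Bellman operator, shape (S,)
        h_new = Th - Th[0]                  # renormalize to keep iterates bounded
        if np.max(np.abs(h_new - h)) < tol:
            h = h_new
            break
        h = h_new
    return tau * (h.max() - h.min())
```

For rewards in $[0,1]$, the two outputs should satisfy $H \leq D$ up to numerical tolerance, matching the inequality quoted in the abstract. Note that in the generative-model and online settings studied in the paper, neither quantity is known in advance; coping with that lack of prior knowledge is precisely the paper's contribution.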

Cite

Text

Tuynman et al. "Finding Good Policies in Average-Reward Markov Decision Processes Without Prior Knowledge." Neural Information Processing Systems, 2024. doi:10.52202/079017-3489

Markdown

[Tuynman et al. "Finding Good Policies in Average-Reward Markov Decision Processes Without Prior Knowledge." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/tuynman2024neurips-finding/) doi:10.52202/079017-3489

BibTeX

@inproceedings{tuynman2024neurips-finding,
  title     = {{Finding Good Policies in Average-Reward Markov Decision Processes Without Prior Knowledge}},
  author    = {Tuynman, Adrienne and Degenne, Rémy and Kaufmann, Emilie},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-3489},
  url       = {https://mlanthology.org/neurips/2024/tuynman2024neurips-finding/}
}