Off-Policy Evaluation for Episodic Partially Observable Markov Decision Processes Under Non-Parametric Models

Abstract

We study the problem of off-policy evaluation (OPE) for episodic Partially Observable Markov Decision Processes (POMDPs) with continuous states. Motivated by the recently proposed proximal causal inference framework, we develop a non-parametric identification result for estimating the policy value via a sequence of so-called V-bridge functions with the help of time-dependent proxy variables. We then develop a fitted-Q-evaluation-type algorithm that estimates the V-bridge functions recursively, solving a non-parametric instrumental variable (NPIV) problem at each step. By analyzing this challenging sequential NPIV estimation, we establish finite-sample error bounds for estimating the V-bridge functions and, consequently, for evaluating the policy value, in terms of the sample size, the length of the horizon, and a so-called (local) measure of ill-posedness at each step. To the best of our knowledge, this is the first finite-sample error bound for OPE in POMDPs under non-parametric models.
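To make the algorithmic description above concrete, below is a minimal sketch of a fitted-Q-evaluation-style backward recursion in which each step solves an NPIV problem. It is not the authors' estimator: it assumes a linear sieve approximation solved by two-stage least squares, omits the target-policy weighting and the specific choice of proxy variables, and all names (`npiv_2sls`, `psi_feats`, `phi_feats`, the layout of `data[t]`) are hypothetical.

```python
import numpy as np

def npiv_2sls(psi, phi, y, reg=1e-6):
    """Sieve two-stage least squares for the NPIV problem
    E[psi(W) @ beta - y | Z] = 0, using instrument features phi(Z).
    psi: (n, p) bridge-function features; phi: (n, q) instrument features.
    """
    # First stage: project bridge features and target onto the instrument space.
    gram = phi.T @ phi + reg * np.eye(phi.shape[1])
    proj = phi @ np.linalg.solve(gram, phi.T @ np.column_stack([psi, y]))
    psi_hat, y_hat = proj[:, :-1], proj[:, -1]
    # Second stage: regress the projected target on the projected features.
    beta = np.linalg.solve(psi_hat.T @ psi_hat + reg * np.eye(psi.shape[1]),
                           psi_hat.T @ y_hat)
    return beta

def fitted_q_evaluation(data, psi_feats, phi_feats, horizon):
    """Backward recursion estimating V-bridge functions b_T, ..., b_1.
    data[t] holds outcome proxies W_t, instrument proxies Z_t, rewards r_t,
    and next-step proxies W_{t+1} (hypothetical field layout).
    """
    b_next = lambda w: np.zeros(len(w))             # boundary condition b_{T+1} = 0
    for t in reversed(range(horizon)):
        W, Z, r, W_next = data[t]
        y = r + b_next(W_next)                      # pseudo-outcome r_t + b_{t+1}
        beta = npiv_2sls(psi_feats(W), phi_feats(Z), y)
        b_next = lambda w, beta=beta: psi_feats(w) @ beta
    return b_next                                   # estimate of the first V-bridge b_1
```

Under this sketch, the policy value would be estimated by averaging the returned b_1 over the initial-step proxies, e.g. `np.mean(fitted_q_evaluation(...)(W_init))`; the paper's error bounds track how the NPIV estimation error at each of these steps, scaled by the local measure of ill-posedness, accumulates over the horizon.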

Cite

Text

Miao et al. "Off-Policy Evaluation for Episodic Partially Observable Markov Decision Processes Under Non-Parametric Models." Neural Information Processing Systems, 2022.

Markdown

[Miao et al. "Off-Policy Evaluation for Episodic Partially Observable Markov Decision Processes Under Non-Parametric Models." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/miao2022neurips-offpolicy/)

BibTeX

@inproceedings{miao2022neurips-offpolicy,
  title     = {{Off-Policy Evaluation for Episodic Partially Observable Markov Decision Processes Under Non-Parametric Models}},
  author    = {Miao, Rui and Qi, Zhengling and Zhang, Xiaoke},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/miao2022neurips-offpolicy/}
}