Bootstrapping Fitted Q-Evaluation for Off-Policy Inference

Abstract

Bootstrapping provides a flexible and effective approach for assessing the quality of batch reinforcement learning, yet its theoretical properties are poorly understood. In this paper, we study the use of bootstrapping in off-policy evaluation (OPE), and in particular, we focus on fitted Q-evaluation (FQE), which is known to be minimax-optimal in the tabular and linear-model cases. We propose a bootstrapping FQE method for inferring the distribution of the policy evaluation error and show that this method is asymptotically efficient and distributionally consistent for off-policy statistical inference. To overcome the computational limits of bootstrapping, we further adapt a subsampling procedure that improves the runtime by an order of magnitude. We numerically evaluate the bootstrapping method in classical RL environments for confidence interval estimation, estimating the variance of an off-policy evaluator, and estimating the correlation between multiple off-policy evaluators.
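
The procedure the abstract describes can be sketched compactly: run FQE on a bootstrap resample of the logged transitions, repeat, and read off the empirical distribution of the resulting value estimates, optionally drawing m-out-of-n subsamples for speed. The sketch below is a minimal tabular illustration of that idea under assumed interfaces, not the authors' implementation; the target-policy matrix pi, the initial-state distribution, and helper names such as fqe_tabular and bootstrap_fqe are hypothetical.

# Minimal sketch (not the paper's code) of tabular FQE plus a bootstrap over
# logged transitions; pi is an assumed (n_states, n_actions) policy matrix.
import numpy as np

def fqe_tabular(transitions, pi, n_states, n_actions, gamma=0.95, n_iters=200):
    """Tabular FQE: iterate Q(s,a) <- mean over data of [r + gamma * sum_a' pi(a'|s') Q(s',a')]."""
    Q = np.zeros((n_states, n_actions))
    s, a, r, s_next = (np.asarray(x) for x in zip(*transitions))
    for _ in range(n_iters):
        targets = r + gamma * (pi[s_next] * Q[s_next]).sum(axis=1)
        sums = np.zeros_like(Q)
        counts = np.zeros_like(Q)
        np.add.at(sums, (s, a), targets)    # accumulate regression targets per (s, a)
        np.add.at(counts, (s, a), 1.0)
        Q = np.divide(sums, counts, out=Q.copy(), where=counts > 0)
    return Q

def policy_value(Q, pi, initial_state_dist):
    """Estimated value of the target policy under the initial-state distribution."""
    return float(initial_state_dist @ (pi * Q).sum(axis=1))

def bootstrap_fqe(transitions, pi, n_states, n_actions, init_dist,
                  n_boot=200, subsample=None, seed=None):
    """Resample transitions (optionally m-out-of-n) and rerun FQE to obtain the
    bootstrap distribution of the policy-value estimate."""
    rng = np.random.default_rng(seed)
    n = len(transitions)
    m = subsample or n                      # m < n gives the subsampled bootstrap
    values = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=m)
        boot = [transitions[i] for i in idx]
        Q_b = fqe_tabular(boot, pi, n_states, n_actions)
        values.append(policy_value(Q_b, pi, init_dist))
    return np.asarray(values)

From the returned bootstrap values one can read off, for instance, the 2.5% and 97.5% percentiles as a confidence interval, their empirical variance, or the correlation between values produced by multiple evaluators, matching the use cases listed in the abstract.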

Cite

Text

Hao et al. "Bootstrapping Fitted Q-Evaluation for Off-Policy Inference." International Conference on Machine Learning, 2021.

Markdown

[Hao et al. "Bootstrapping Fitted Q-Evaluation for Off-Policy Inference." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/hao2021icml-bootstrapping/)

BibTeX

@inproceedings{hao2021icml-bootstrapping,
  title     = {{Bootstrapping Fitted Q-Evaluation for Off-Policy Inference}},
  author    = {Hao, Botao and Ji, Xiang and Duan, Yaqi and Hu, Hao and Szepesvari, Csaba and Wang, Mengdi},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {4074--4084},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/hao2021icml-bootstrapping/}
}