Demystifying the Paradox of Importance Sampling with an Estimated History-Dependent Behavior Policy in Off-Policy Evaluation
Abstract
This paper studies off-policy evaluation (OPE) in reinforcement learning, with a focus on behavior policy estimation for importance sampling. Prior work has shown empirically that estimating a history-dependent behavior policy can lead to lower mean squared error (MSE) even when the true behavior policy is Markovian. However, the question of why the use of history should lower the MSE remains open. In this paper, we theoretically demystify this paradox by deriving a bias-variance decomposition of the MSE of ordinary importance sampling (IS) estimators, demonstrating that history-dependent behavior policy estimation decreases their asymptotic variances while increasing their finite-sample biases. Additionally, we show that the variance consistently decreases as the estimated behavior policy conditions on a longer history. We extend these findings to a range of other OPE estimators, including the sequential IS estimator, the doubly robust estimator, and the marginalized IS estimator, with the behavior policy estimated either parametrically or non-parametrically.
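For reference, the ordinary (trajectory-wise) IS estimator discussed in the abstract can be written in the following standard form; the notation here ($\pi_e$ for the target policy, $\hat{\pi}_b$ for the estimated behavior policy, $H_t$ for the observed history up to time $t$, $R_t$ for the reward, and $n$ trajectories of horizon $T$) is a common convention and may not match the paper's own symbols:

$$
\hat{V}_{\mathrm{IS}} \;=\; \frac{1}{n} \sum_{i=1}^{n} \left( \prod_{t=0}^{T-1} \frac{\pi_e\!\left(A_t^{(i)} \mid S_t^{(i)}\right)}{\hat{\pi}_b\!\left(A_t^{(i)} \mid H_t^{(i)}\right)} \right) \sum_{t=0}^{T-1} \gamma^{t} R_t^{(i)}.
$$

When $\hat{\pi}_b$ conditions only on the current state, the denominator reduces to a Markovian estimate $\hat{\pi}_b(A_t \mid S_t)$; the paper's analysis concerns how replacing the true behavior policy with history-dependent estimates of this denominator affects the bias and variance of the resulting estimator.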
Cite
Text
Zhou et al. "Demystifying the Paradox of Importance Sampling with an Estimated History-Dependent Behavior Policy in Off-Policy Evaluation." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Zhou et al. "Demystifying the Paradox of Importance Sampling with an Estimated History-Dependent Behavior Policy in Off-Policy Evaluation." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/zhou2025icml-demystifying/)
BibTeX
@inproceedings{zhou2025icml-demystifying,
  title     = {{Demystifying the Paradox of Importance Sampling with an Estimated History-Dependent Behavior Policy in Off-Policy Evaluation}},
  author    = {Zhou, Hongyi and Hanna, Josiah P. and Zhu, Jin and Yang, Ying and Shi, Chengchun},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {78759--78785},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/zhou2025icml-demystifying/}
}