Off-Policy Evaluation Under Nonignorable Missing Data

Abstract

Off-Policy Evaluation (OPE) aims to estimate the value of a target policy using offline data collected from potentially different policies. In real-world applications, however, logged data often suffers from missingness. While OPE has been extensively studied in the literature, a theoretical understanding of how missing data affects OPE results is still lacking. In this paper, we investigate OPE in the presence of monotone missingness and theoretically demonstrate that value estimates remain unbiased under ignorable missingness but can be biased under nonignorable (informative) missingness. To restore the consistency of value estimation, we propose an inverse probability weighting (IPW) value estimator and conduct statistical inference to quantify the uncertainty of the estimates. Through a series of numerical experiments, we empirically demonstrate that our proposed estimator yields more reliable value inference under missing data.
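For intuition, the following is a minimal numerical sketch, not the paper's estimator (which handles monotone missingness in sequential data): it illustrates why nonignorable missingness biases a plug-in OPE estimate and how an inverse-probability-of-observation weight corrects it. The reward distribution, importance ratios, and missingness model below are all hypothetical, and the observation probabilities are assumed known rather than estimated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical logged data: rewards from the behavior policy and
# per-sample importance ratios pi_target / pi_behavior.
rewards = rng.normal(loc=1.0, scale=0.5, size=n)
rho = rng.uniform(0.5, 2.0, size=n)

# Complete-data off-policy value estimate, for reference.
full = np.mean(rho * rewards)

# Nonignorable missingness: whether a sample is observed depends on
# its reward (a hypothetical mechanism, chosen only for illustration).
p_obs = 1.0 / (1.0 + np.exp(-2.0 * (rewards - 1.0)))
observed = rng.uniform(size=n) < p_obs

# Naive estimator on observed samples only: biased, because high
# rewards are over-represented among the observed data.
naive = np.mean(rho[observed] * rewards[observed])

# Horvitz-Thompson-style IPW correction: divide each observed term by
# its observation probability and normalize by the full sample size.
ipw = np.sum(rho[observed] * rewards[observed] / p_obs[observed]) / n

print(f"complete-data: {full:.3f}  naive: {naive:.3f}  IPW: {ipw:.3f}")
```

In this simulation the naive estimate is biased upward, since samples with large rewards are more likely to be observed, while the IPW estimate concentrates around the complete-data value. In practice the observation probabilities must themselves be modeled and estimated, which is where the paper's treatment of nonignorable missingness comes in.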

Cite

Text

Wang et al. "Off-Policy Evaluation Under Nonignorable Missing Data." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Wang et al. "Off-Policy Evaluation Under Nonignorable Missing Data." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/wang2025icml-offpolicy/)

BibTeX

@inproceedings{wang2025icml-offpolicy,
  title     = {{Off-Policy Evaluation Under Nonignorable Missing Data}},
  author    = {Wang, Han and Xu, Yang and Lu, Wenbin and Song, Rui},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {65020--65058},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/wang2025icml-offpolicy/}
}