Two-Way Deconfounder for Off-Policy Evaluation in Causal Reinforcement Learning
Abstract
This paper studies off-policy evaluation (OPE) in the presence of unmeasured confounders. Inspired by the two-way fixed-effects regression model widely used in the panel data literature, we propose a two-way unmeasured confounding assumption to model the system dynamics in causal reinforcement learning. Building on this assumption, we develop a two-way deconfounder algorithm that uses a neural tensor network to simultaneously learn the unmeasured confounders and the system dynamics, based on which a model-based estimator can be constructed for consistent policy value estimation. We illustrate the effectiveness of the proposed estimator through theoretical results and numerical experiments.
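To make the two-way structure concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' implementation): it assumes the unmeasured confounder is proxied by the interaction of a trajectory-specific embedding and a time-specific embedding, which feeds a learned transition-and-reward model that could be rolled out under a target policy for model-based value estimation. All class names, dimensions, and architecture choices here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TwoWayTransitionModel(nn.Module):
    """Hypothetical sketch of a two-way latent structure for OPE under
    unmeasured confounding: each trajectory i and each time step t has its
    own learned embedding, echoing a two-way fixed-effects decomposition."""

    def __init__(self, n_traj, horizon, state_dim, action_dim, latent_dim=8, hidden=64):
        super().__init__()
        self.traj_embed = nn.Embedding(n_traj, latent_dim)   # trajectory-specific latent factor
        self.time_embed = nn.Embedding(horizon, latent_dim)  # time-specific latent factor
        # bilinear ("tensor") interaction between the two latent factors
        self.interact = nn.Bilinear(latent_dim, latent_dim, latent_dim)
        # transition/reward head conditioned on observed (state, action) and the latent proxy
        self.dynamics = nn.Sequential(
            nn.Linear(state_dim + action_dim + latent_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim + 1),  # predicts next state and reward
        )

    def forward(self, traj_idx, time_idx, state, action):
        u = self.interact(self.traj_embed(traj_idx), self.time_embed(time_idx))
        out = self.dynamics(torch.cat([state, action, u], dim=-1))
        next_state, reward = out[..., :-1], out[..., -1]
        return next_state, reward

# Illustrative usage: fit by regressing observed transitions and rewards on
# (state, action, trajectory index, time index), then roll the fitted model
# forward under the target policy to estimate its value.
model = TwoWayTransitionModel(n_traj=100, horizon=50, state_dim=4, action_dim=2)
```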
Cite
Text
Yu et al. "Two-Way Deconfounder for Off-Policy Evaluation in Causal Reinforcement Learning." Neural Information Processing Systems, 2024. doi:10.52202/079017-2485

Markdown

[Yu et al. "Two-Way Deconfounder for Off-Policy Evaluation in Causal Reinforcement Learning." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/yu2024neurips-twoway/) doi:10.52202/079017-2485

BibTeX
@inproceedings{yu2024neurips-twoway,
  title     = {{Two-Way Deconfounder for Off-Policy Evaluation in Causal Reinforcement Learning}},
  author    = {Yu, Shuguang and Fang, Shuxing and Peng, Ruixin and Qi, Zhengling and Zhou, Fan and Shi, Chengchun},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2485},
  url       = {https://mlanthology.org/neurips/2024/yu2024neurips-twoway/}
}