High-Confidence Off-Policy (or Counterfactual) Variance Estimation
Abstract
Many sequential decision-making systems leverage data collected using prior policies to propose a new policy. For critical applications, it is important that high-confidence guarantees on the new policy’s behavior are provided before deployment, to ensure that the policy will behave as desired. Prior work has studied high-confidence off-policy estimation of the expected return; however, high-confidence off-policy estimation of the variance of returns can be equally critical for high-risk applications. In this paper, we tackle the previously open problem of estimating and bounding, with high confidence, the variance of returns from off-policy data.
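To make the estimated quantity concrete, below is a minimal sketch (in Python/NumPy) of a plug-in importance-sampling estimator for the variance of returns under an evaluation policy, computed from trajectories gathered by a behavior policy. The function names, the synthetic data, and the percentile bootstrap standing in for high-confidence bounds are illustrative assumptions; this is not the authors' estimator or their concentration-based guarantees.

```python
# A hedged sketch, NOT the paper's method: plug-in off-policy variance
# estimation via per-trajectory importance sampling, with a percentile
# bootstrap as a stand-in for the paper's high-confidence bounds.
import numpy as np

rng = np.random.default_rng(0)

def trajectory_weight(pi_e_probs, pi_b_probs):
    """Product of per-step importance ratios pi_e(a|s) / pi_b(a|s)."""
    return np.prod(np.asarray(pi_e_probs) / np.asarray(pi_b_probs))

def off_policy_variance(returns, weights):
    """Plug-in estimate of Var[G] under the evaluation policy:
    IS estimate of E[G^2] minus the squared IS estimate of E[G]."""
    g = np.asarray(returns, dtype=float)
    w = np.asarray(weights, dtype=float)
    first = np.mean(w * g)        # IS estimate of E[G]
    second = np.mean(w * g**2)    # IS estimate of E[G^2]
    return second - first**2

def bootstrap_interval(returns, weights, alpha=0.05, n_boot=2000):
    """Percentile-bootstrap interval for the variance estimate
    (the paper derives actual high-confidence bounds instead)."""
    g = np.asarray(returns, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = len(g)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample trajectories
        stats[b] = off_policy_variance(g[idx], w[idx])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Toy usage with synthetic returns and weights; with real data, each
# weight would come from trajectory_weight over a logged trajectory.
w_example = trajectory_weight([0.5, 0.9], [0.4, 0.8])  # one 2-step trajectory
n = 500
weights = rng.lognormal(mean=0.0, sigma=0.3, size=n)  # stand-in ratios
returns = rng.normal(loc=1.0, scale=2.0, size=n)      # stand-in returns
print("variance estimate:", off_policy_variance(returns, weights))
print("95% bootstrap interval:", bootstrap_interval(returns, weights))
```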
Cite
Text
Chandak et al. "High-Confidence Off-Policy (or Counterfactual) Variance Estimation." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I8.16855

Markdown
[Chandak et al. "High-Confidence Off-Policy (or Counterfactual) Variance Estimation." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/chandak2021aaai-high/) doi:10.1609/AAAI.V35I8.16855

BibTeX
@inproceedings{chandak2021aaai-high,
  title     = {{High-Confidence Off-Policy (or Counterfactual) Variance Estimation}},
  author    = {Chandak, Yash and Shankar, Shiv and Thomas, Philip S.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {6939--6947},
  doi       = {10.1609/AAAI.V35I8.16855},
  url       = {https://mlanthology.org/aaai/2021/chandak2021aaai-high/}
}