Average-Reward Off-Policy Policy Evaluation with Function Approximation
Abstract
We consider off-policy policy evaluation with function approximation (FA) in average-reward MDPs, where the goal is to estimate both the reward rate and the differential value function. For this problem, bootstrapping is necessary and, along with off-policy learning and FA, results in the deadly triad (Sutton & Barto, 2018). To address the deadly triad, we propose two novel algorithms, reproducing the celebrated success of Gradient TD algorithms in the average-reward setting. In terms of estimating the differential value function, the algorithms are the first convergent off-policy linear function approximation algorithms. In terms of estimating the reward rate, the algorithms are the first convergent off-policy linear function approximation algorithms that do not require estimating the density ratio. We demonstrate empirically the advantage of the proposed algorithms, as well as their nonlinear variants, over a competitive density-ratio-based approach, in a simple domain and in challenging robot simulation tasks.
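For context, the two quantities to be estimated can be written in standard average-reward notation (following, e.g., Sutton & Barto, 2018); the notation below is generic and may differ from the paper's. Under suitable ergodicity assumptions, the reward rate and the differential value function of the target policy $\pi$ are

$$
\bar r_\pi \doteq \lim_{T\to\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\!\left[R_t \mid A_{0:t-1}\sim\pi\right],
\qquad
v_\pi(s) \doteq \mathbb{E}\!\left[\sum_{t=0}^{\infty}\bigl(R_{t+1}-\bar r_\pi\bigr)\,\middle|\, S_0=s,\ A_{0:\infty}\sim\pi\right],
$$

which satisfy the differential Bellman equation

$$
v_\pi(s) = \sum_a \pi(a\mid s)\sum_{s',r} p(s',r\mid s,a)\bigl[r-\bar r_\pi+v_\pi(s')\bigr].
$$

As a rough illustration of the setting (off-policy data, bootstrapping, linear FA), the sketch below combines a differential TD error with a GTD2-style correction and importance-sampling ratios. It is a minimal sketch only, not the paper's algorithm; the toy MDP, policies, features, and step sizes are all assumptions made for the example.

```python
# Illustrative only: a GTD2-style update with a differential TD error under
# linear function approximation and importance-sampling ratios. This is a
# generic sketch of the problem setting, NOT the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-state, 2-action MDP (hypothetical): action 0 stays, action 1 switches.
n_states, n_actions = 2, 2
def step(s, a):
    s_next = s if a == 0 else 1 - s
    reward = 1.0 if s_next == 1 else 0.0
    return s_next, reward

# Behavior and target policies (hypothetical, state-independent).
b_probs = np.array([0.5, 0.5])      # behavior policy
pi_probs = np.array([0.2, 0.8])     # target policy

# Linear features: one-hot over states.
def feat(s):
    x = np.zeros(n_states)
    x[s] = 1.0
    return x

w = np.zeros(n_states)   # differential value weights
h = np.zeros(n_states)   # GTD2-style secondary weights
r_bar = 0.0              # reward-rate estimate
alpha, beta, eta = 0.05, 0.05, 0.01  # assumed step sizes

s = 0
for t in range(50_000):
    a = rng.choice(n_actions, p=b_probs)
    s_next, r = step(s, a)
    rho = pi_probs[a] / b_probs[a]            # importance-sampling ratio
    x, x_next = feat(s), feat(s_next)
    delta = r - r_bar + w @ x_next - w @ x    # differential TD error
    # GTD2-style corrections (off-policy, bootstrapped, linear FA).
    h += beta * rho * (delta - h @ x) * x
    w += alpha * rho * (x - x_next) * (h @ x)
    r_bar += eta * rho * delta                # reward-rate update
    s = s_next

print("estimated reward rate:", r_bar)
print("differential value estimates:", w)
```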
Cite
Text
Zhang et al. "Average-Reward Off-Policy Policy Evaluation with Function Approximation." International Conference on Machine Learning, 2021.
Markdown
[Zhang et al. "Average-Reward Off-Policy Policy Evaluation with Function Approximation." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/zhang2021icml-averagereward/)
BibTeX
@inproceedings{zhang2021icml-averagereward,
title = {{Average-Reward Off-Policy Policy Evaluation with Function Approximation}},
author = {Zhang, Shangtong and Wan, Yi and Sutton, Richard S. and Whiteson, Shimon},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {12578--12588},
volume = {139},
url = {https://mlanthology.org/icml/2021/zhang2021icml-averagereward/}
}