The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation
Abstract
Theoretical guarantees in reinforcement learning (RL) are known to suffer from multiplicative blow-up factors with respect to the misspecification error of function approximation. Yet the nature of such approximation factors, especially their optimal form in a given learning problem, is poorly understood. In this paper we study this question in linear off-policy value function estimation, where many open questions remain. We examine the approximation factor in a broad spectrum of settings, such as presence vs. absence of state aliasing and full vs. partial coverage of the state space. Our core results include instance-dependent upper bounds on the approximation factors with respect to both the weighted $L_2$-norm (where the weighting is the offline state distribution) and the $L_\infty$-norm. We show that these approximation factors are optimal (in an instance-dependent sense) for a number of these settings. In other cases, we show that the instance-dependent parameters which appear in the upper bounds are necessary, and that the finiteness of either alone cannot guarantee a finite approximation factor even in the limit of infinite data.
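As a point of reference (a schematic sketch of the general setup, not a statement of the paper's theorems), approximation-factor guarantees of the kind discussed above typically take the form

$\|\hat{V} - V^{\pi}\|_{2,\mu} \;\le\; \alpha \cdot \inf_{\theta} \|\Phi\theta - V^{\pi}\|_{2,\mu} \;+\; \varepsilon_{\mathrm{stat}},$

where $\mu$ is the offline state distribution, $\Phi\theta$ ranges over the linear function class, $V^{\pi}$ is the target value function, the infimum is the misspecification error, and $\varepsilon_{\mathrm{stat}}$ is a statistical error term that vanishes with more data. The multiplier $\alpha \ge 1$ is the approximation factor whose optimal instance-dependent form is the subject of the paper; the notation used here ($\alpha$, $\Phi$, $\varepsilon_{\mathrm{stat}}$) is illustrative rather than taken from the paper itself.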
Cite
Text
Amortila et al. "The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation." International Conference on Machine Learning, 2023.

Markdown
[Amortila et al. "The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/amortila2023icml-optimal/)

BibTeX
@inproceedings{amortila2023icml-optimal,
title = {{The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation}},
author = {Amortila, Philip and Jiang, Nan and Szepesvari, Csaba},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {768-790},
volume = {202},
url = {https://mlanthology.org/icml/2023/amortila2023icml-optimal/}
}