Batch Value-Function Approximation with Only Realizability
Abstract
We make progress on a long-standing open problem of batch reinforcement learning (RL): learning Q* from an exploratory and polynomial-sized dataset, using a realizable and otherwise arbitrary function class. In fact, all existing algorithms demand function-approximation assumptions stronger than realizability, and the mounting negative evidence has led to a conjecture that sample-efficient learning is impossible in this setting (Chen & Jiang, 2019). Our algorithm, BVFT, breaks the hardness conjecture (albeit under a stronger notion of exploratory data) via a tournament procedure that reduces the learning problem to pairwise comparison, and solves the latter with the help of a state-action-space partition constructed from the compared functions. We also discuss how BVFT can be applied to model selection, among other extensions and open problems.
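To make the tournament structure concrete, here is a minimal Python sketch of a BVFT-style selection rule, following only the abstract's description. Everything concrete here is an illustrative assumption, not the paper's exact procedure: the candidate interface (`f.q(s, a)` for the Q-value and `f.v(s)` for the greedy value), the discretization width `eps` used to build the partition from a pair of functions, and the cell-averaged Bellman-residual loss.

```python
import numpy as np

def partition_cells(f_vals, g_vals, eps):
    """Group sample indices by discretized (f, g) value pairs -- the
    state-action-space partition built from the two compared functions."""
    keys = zip(np.floor(f_vals / eps).astype(int),
               np.floor(g_vals / eps).astype(int))
    cells = {}
    for i, k in enumerate(keys):
        cells.setdefault(k, []).append(i)
    return list(cells.values())

def pairwise_loss(f, g, data, gamma, eps):
    """Bellman-residual estimate for f on the partition induced by (f, g).
    data is a list of (s, a, r, s') transitions. This cell-averaged loss
    is a simplified stand-in for the paper's projected Bellman error."""
    f_vals = np.array([f.q(s, a) for (s, a, r, s2) in data])
    g_vals = np.array([g.q(s, a) for (s, a, r, s2) in data])
    targets = np.array([r + gamma * f.v(s2) for (s, a, r, s2) in data])
    sq_err = 0.0
    for idx in partition_cells(f_vals, g_vals, eps):
        idx = np.array(idx)
        # Within a cell, f is near-constant by construction, so comparing
        # cell averages approximates f minus its projected Bellman backup.
        resid = f_vals[idx].mean() - targets[idx].mean()
        sq_err += len(idx) * resid ** 2
    return np.sqrt(sq_err / len(data))

def bvft_select(candidates, data, gamma, eps=0.1):
    """Tournament: score each candidate by its worst pairwise loss against
    every opponent, and return the candidate with the smallest score."""
    scores = [max(pairwise_loss(f, g, data, gamma, eps)
                  for g in candidates if g is not f)
              for f in candidates]
    return candidates[int(np.argmin(scores))]
```

Because each candidate is scored against all opponents, a single badly chosen comparator cannot make a good function look bad without also being exposed in its own comparisons; this worst-case scoring is what lets the reduction to pairwise comparison stand in for direct Q*-learning.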
Cite

Text

Xie and Jiang. "Batch Value-Function Approximation with Only Realizability." International Conference on Machine Learning, 2021.

Markdown

[Xie and Jiang. "Batch Value-Function Approximation with Only Realizability." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/xie2021icml-batch/)

BibTeX
@inproceedings{xie2021icml-batch,
  title     = {{Batch Value-Function Approximation with Only Realizability}},
  author    = {Xie, Tengyang and Jiang, Nan},
  booktitle = {International Conference on Machine Learning},
  year      = {2021},
  pages     = {11404--11413},
  volume    = {139},
  url       = {https://mlanthology.org/icml/2021/xie2021icml-batch/}
}