On Oracle-Efficient PAC RL with Rich Observations
Abstract
We study the computational tractability of PAC reinforcement learning with rich observations. We present new provably sample-efficient algorithms for environments with deterministic hidden state dynamics and stochastic rich observations. These methods operate in an oracle model of computation -- accessing policy and value function classes exclusively through standard optimization primitives -- and therefore represent computationally efficient alternatives to prior algorithms that require enumeration. With stochastic hidden state dynamics, we prove that the only known sample-efficient algorithm, OLIVE, cannot be implemented in the oracle model. We also present several examples that illustrate fundamental challenges of tractable PAC reinforcement learning in such general settings.
Cite
Text
Dann et al. "On Oracle-Efficient PAC RL with Rich Observations." Neural Information Processing Systems, 2018.
Markdown
[Dann et al. "On Oracle-Efficient PAC RL with Rich Observations." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/dann2018neurips-oracleefficient/)
BibTeX
@inproceedings{dann2018neurips-oracleefficient,
title = {{On Oracle-Efficient PAC RL with Rich Observations}},
author = {Dann, Christoph and Jiang, Nan and Krishnamurthy, Akshay and Agarwal, Alekh and Langford, John and Schapire, Robert E.},
booktitle = {Neural Information Processing Systems},
year = {2018},
pages = {1422--1432},
url = {https://mlanthology.org/neurips/2018/dann2018neurips-oracleefficient/}
}