Towards Data-Driven Offline Simulations for Online Reinforcement Learning

Abstract

Modern decision-making systems, from robots to web recommendation engines, are expected to adapt: to user preferences, changing circumstances, or even new tasks. Yet it is still uncommon to deploy a dynamically learning agent (rather than a fixed policy) to a production system, as this is perceived as unsafe. Using historical data to reason about learning algorithms, analogous to offline policy evaluation (OPE) for fixed policies, could help practitioners evaluate and ultimately deploy such adaptive agents to production. In this work, we formalize offline learner simulation (OLS) for reinforcement learning (RL) and propose a novel evaluation protocol that measures both fidelity and efficiency of the simulation. For environments with complex high-dimensional observations, we propose a semi-parametric approach that leverages recent advances in latent state discovery. In preliminary experiments, we show the advantage of our approach compared to fully non-parametric baselines. The code to reproduce these experiments will be made available at https://github.com/microsoft/rl-offline-simulation.
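The abstract contrasts the proposed semi-parametric approach with "fully non-parametric baselines" but does not spell out what a non-parametric offline learner simulation looks like. As a rough, hedged illustration of that general idea only (not the paper's method), the sketch below replays logged transitions against a learning agent, serving a logged outcome only when the agent's chosen action matches the logged action, in the spirit of queue-based replay evaluation. All names here (OfflineReplaySimulator, EpsilonGreedyLearner, simulate) are hypothetical and not taken from the paper or its code release.

```python
# Hypothetical sketch of a non-parametric offline learner simulation loop.
# Logged transitions are only "served" to the learner when its chosen action
# matches the logged action; otherwise the simulated episode ends early.
import random
from collections import defaultdict, deque


class OfflineReplaySimulator:
    def __init__(self, logged_transitions):
        # Index logged (state, action) pairs -> queue of (reward, next_state, done).
        self.queues = defaultdict(deque)
        for s, a, r, s_next, done in logged_transitions:
            self.queues[(s, a)].append((r, s_next, done))

    def step(self, state, action):
        """Return a logged outcome for (state, action) if one remains, else None."""
        q = self.queues.get((state, action))
        if not q:
            return None  # no matching logged data; the simulated episode stalls
        return q.popleft()


class EpsilonGreedyLearner:
    """A toy tabular Q-learning agent standing in for the 'online learner'."""
    def __init__(self, actions, eps=0.1, alpha=0.5, gamma=0.99):
        self.q = defaultdict(float)
        self.actions, self.eps, self.alpha, self.gamma = actions, eps, alpha, gamma

    def act(self, state):
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, r, s_next, done):
        target = r if done else r + self.gamma * max(self.q[(s_next, a2)] for a2 in self.actions)
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])


def simulate(logged_transitions, actions, start_states, n_episodes=100):
    """Replay logged data against a learning agent; record per-episode return."""
    sim = OfflineReplaySimulator(logged_transitions)
    learner = EpsilonGreedyLearner(actions)
    returns = []
    for _ in range(n_episodes):
        state, total, done = random.choice(start_states), 0.0, False
        while not done:
            action = learner.act(state)
            outcome = sim.step(state, action)
            if outcome is None:
                break  # logged data exhausted for this (state, action)
            r, s_next, done = outcome
            learner.update(state, action, r, s_next, done)
            total += r
            state = s_next
        returns.append(total)
    return returns
```

Such a replay-style simulator discards logged transitions whose actions the learner does not select, which is one reason a semi-parametric approach (e.g., grouping observations by discovered latent states) can be more data-efficient in high-dimensional observation spaces.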

Cite

Text

Tang et al. "Towards Data-Driven Offline Simulations for Online Reinforcement Learning." NeurIPS 2022 Workshops: Offline_RL, 2022.

Markdown

[Tang et al. "Towards Data-Driven Offline Simulations for Online Reinforcement Learning." NeurIPS 2022 Workshops: Offline_RL, 2022.](https://mlanthology.org/neuripsw/2022/tang2022neuripsw-datadriven/)

BibTeX

@inproceedings{tang2022neuripsw-datadriven,
  title     = {{Towards Data-Driven Offline Simulations for Online Reinforcement Learning}},
  author    = {Tang, Shengpu and Frujeri, Felipe Vieira and Misra, Dipendra and Lamb, Alex and Langford, John and Mineiro, Paul and Kochman, Sebastian},
  booktitle = {NeurIPS 2022 Workshops: Offline_RL},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/tang2022neuripsw-datadriven/}
}