Exponentially Weighted Imitation Learning for Batched Historical Data

Abstract

We consider deep policy learning with only batched historical trajectories. The main challenge of this problem is that the learner no longer has a simulator or ``environment oracle'' as in most reinforcement learning settings. To solve this problem, we propose a monotonic advantage reweighted imitation learning strategy that is applicable to problems with complex nonlinear function approximation and works well with hybrid (discrete and continuous) action spaces. The method does not rely on knowledge of the behavior policy, and can thus be used to learn from data generated by an unknown policy. Under mild conditions, our algorithm, though surprisingly simple, has a policy improvement bound and outperforms most competing methods empirically. Thorough numerical results are also provided to demonstrate the efficacy of the proposed methodology.
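The core idea described in the abstract, reweighting an imitation (log-likelihood) loss by an exponential function of the advantage, can be sketched as below. This is a minimal illustrative sketch, not the paper's exact implementation: the function names, the `beta` temperature hyperparameter, and the clipping for numerical stability are all assumptions introduced here.

```python
import numpy as np

def exp_advantage_weights(advantages, beta=1.0, clip=20.0):
    # Exponential advantage weights exp(beta * A). The clip bound is an
    # illustrative assumption to avoid overflow for large advantages.
    return np.exp(np.clip(beta * advantages, None, clip))

def weighted_imitation_loss(log_probs, advantages, beta=1.0):
    # Negative weighted log-likelihood over demonstrated (state, action)
    # pairs: actions with higher estimated advantage are imitated more
    # strongly. With beta = 0 this reduces to plain behavior cloning.
    w = exp_advantage_weights(advantages, beta)
    return -np.mean(w * log_probs)
```

With `beta = 0` every weight equals 1 and the loss is ordinary behavior cloning; larger `beta` biases the learned policy toward higher-advantage actions in the batch.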

Cite

Text

Wang et al. "Exponentially Weighted Imitation Learning for Batched Historical Data." Neural Information Processing Systems, 2018.

Markdown

[Wang et al. "Exponentially Weighted Imitation Learning for Batched Historical Data." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/wang2018neurips-exponentially/)

BibTeX

@inproceedings{wang2018neurips-exponentially,
  title     = {{Exponentially Weighted Imitation Learning for Batched Historical Data}},
  author    = {Wang, Qing and Xiong, Jiechao and Han, Lei and Sun, Peng and Liu, Han and Zhang, Tong},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {6288--6297},
  url       = {https://mlanthology.org/neurips/2018/wang2018neurips-exponentially/}
}