Transfer of Samples in Policy Search via Multiple Importance Sampling

Abstract

We consider the transfer of experience samples in reinforcement learning. Most previous work in this context has focused on value-based settings, where transferring instances conveniently reduces to the transfer of (s,a,s',r) tuples. In this paper, we consider the more complex case of reusing samples in policy search methods, in which the agent is required to transfer entire trajectories between environments with different transition models. By leveraging ideas from multiple importance sampling, we propose robust gradient estimators that effectively achieve this goal, along with several techniques to reduce their variance. In the case where the transition models are known, we theoretically establish the robustness of our estimators to negative transfer. In the case of unknown models, we propose a method to efficiently estimate them when the target task belongs to a finite set of possible tasks and when it belongs to some reproducing kernel Hilbert space. We provide empirical results to show the effectiveness of our estimators.
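The abstract describes reweighting trajectories collected under source tasks via multiple importance sampling to estimate the target-task policy gradient. The sketch below is only an illustration of that general idea, not the paper's exact estimators or variance-reduction techniques: it computes balance-heuristic MIS weights for trajectories drawn from several source distributions and forms a weighted gradient estimate. All function and variable names are hypothetical.

```python
# Illustrative sketch (assumed, not the authors' implementation): balance-heuristic
# multiple importance sampling (MIS) weights for trajectories collected under K
# source task/policy pairs, reused to estimate a target-task policy gradient.
import numpy as np

def balance_heuristic_weights(logp_target, logp_sources, n_per_source):
    """MIS weights w_i = p_target(tau_i) / sum_j (n_j / N) p_j(tau_i).

    logp_target  : (N,)   log-density of each trajectory under the target task/policy
    logp_sources : (K, N) log-density of each trajectory under each source distribution
    n_per_source : (K,)   number of trajectories drawn from each source
    """
    n_per_source = np.asarray(n_per_source, dtype=float)
    total = n_per_source.sum()
    # Log of the mixture density sum_j (n_j / N) p_j(tau), computed stably in log space.
    log_mix = np.logaddexp.reduce(
        np.log(n_per_source / total)[:, None] + logp_sources, axis=0
    )
    return np.exp(logp_target - log_mix)

def mis_policy_gradient(scores, returns, weights):
    """Weighted gradient estimate: (1/N) sum_i w_i * score_i * G_i."""
    return np.mean(weights[:, None] * scores * returns[:, None], axis=0)

# Toy usage with random numbers standing in for real trajectory log-densities.
rng = np.random.default_rng(0)
N, K, d = 6, 2, 3
logp_t = rng.normal(size=N)
logp_s = rng.normal(size=(K, N))
w = balance_heuristic_weights(logp_t, logp_s, n_per_source=[4, 2])
grad = mis_policy_gradient(rng.normal(size=(N, d)), rng.normal(size=N), w)
print(w, grad)
```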

Cite

Text

Tirinzoni et al. "Transfer of Samples in Policy Search via Multiple Importance Sampling." International Conference on Machine Learning, 2019.

Markdown

[Tirinzoni et al. "Transfer of Samples in Policy Search via Multiple Importance Sampling." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/tirinzoni2019icml-transfer/)

BibTeX

@inproceedings{tirinzoni2019icml-transfer,
  title     = {{Transfer of Samples in Policy Search via Multiple Importance Sampling}},
  author    = {Tirinzoni, Andrea and Salvini, Mattia and Restelli, Marcello},
  booktitle = {International Conference on Machine Learning},
  year      = {2019},
  pages     = {6264--6274},
  volume    = {97},
  url       = {https://mlanthology.org/icml/2019/tirinzoni2019icml-transfer/}
}