Q(λ) with Off-Policy Corrections

Abstract

We propose and analyze an alternate approach to off-policy multi-step temporal difference learning, in which off-policy returns are corrected with the current Q-function in terms of rewards, rather than with the target policy in terms of transition probabilities. We prove that such approximate corrections are sufficient for off-policy convergence both in policy evaluation and control, provided certain conditions hold. These conditions relate the distance between the target and behavior policies, the eligibility trace parameter, and the discount factor, and formalize an underlying tradeoff in off-policy TD(λ). We illustrate this theoretical relationship empirically on a continuous-state control task.
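The sketch below illustrates the kind of return the abstract describes: a multi-step off-policy return built from observed rewards and corrected with the current Q-function via the target policy's expected action value, with no importance-sampling ratios on transitions. This is a minimal, illustrative rendering of that idea, not the paper's reference implementation; the function and variable names (e.g. `corrected_lambda_return`, `target_policy`) are assumptions made for the example, and the convergence conditions the abstract mentions (relating λ, the discount factor, and the policy distance) are not enforced here.

```python
import numpy as np

def corrected_lambda_return(rewards, states, actions, q, target_policy,
                            gamma=0.99, lam=0.8):
    """Off-policy corrected lambda-return for each step of a finite
    trajectory (s_0, a_0, r_0, ..., s_T) generated by a behavior policy.

    q[s, a]             -- current Q-function estimate (2-D array)
    target_policy[s, a] -- target policy probabilities pi(a | s)
    """
    T = len(rewards)  # states has length T + 1, actions and rewards length T
    # Expected Q-value under the target policy at each visited state.
    exp_q = np.array([np.dot(target_policy[states[k]], q[states[k]])
                      for k in range(T + 1)])
    # TD errors that correct toward the target policy's expectation,
    # using rewards rather than transition-probability ratios.
    deltas = np.array([rewards[k] + gamma * exp_q[k + 1]
                       - q[states[k], actions[k]] for k in range(T)])
    # Accumulate the corrections backward with (gamma * lambda) decay.
    returns = np.zeros(T)
    acc = 0.0
    for k in reversed(range(T)):
        acc = deltas[k] + gamma * lam * acc
        returns[k] = q[states[k], actions[k]] + acc
    return returns
```

In this sketch the behavior policy never appears explicitly: the correction relies only on the current Q-function and the target policy, which is the contrast with importance-sampling-based off-policy returns that the abstract draws.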

Cite

Text

Harutyunyan et al. "Q(λ) with Off-Policy Corrections." International Conference on Algorithmic Learning Theory, 2016. doi:10.1007/978-3-319-46379-7_21

Markdown

[Harutyunyan et al. "Q(λ) with Off-Policy Corrections." International Conference on Algorithmic Learning Theory, 2016.](https://mlanthology.org/alt/2016/harutyunyan2016alt-offpolicy/) doi:10.1007/978-3-319-46379-7_21

BibTeX

@inproceedings{harutyunyan2016alt-offpolicy,
  title     = {{Q(λ) with Off-Policy Corrections}},
  author    = {Harutyunyan, Anna and Bellemare, Marc G. and Stepleton, Tom and Munos, Rémi},
  booktitle = {International Conference on Algorithmic Learning Theory},
  year      = {2016},
  pages     = {305--320},
  doi       = {10.1007/978-3-319-46379-7_21},
  url       = {https://mlanthology.org/alt/2016/harutyunyan2016alt-offpolicy/}
}