Linear Off-Policy Actor-Critic
Abstract
This paper presents the first actor-critic algorithm for off-policy reinforcement learning. Our algorithm is online and incremental, and its per-time-step complexity scales linearly with the number of learned weights. Previous work on actor-critic algorithms is limited to the on-policy setting and does not take advantage of the recent advances in off-policy gradient temporal-difference learning. Off-policy techniques, such as Greedy-GQ, enable a target policy to be learned while following and obtaining data from another (behavior) policy. For many problems, however, actor-critic methods are more practical than action value methods (like Greedy-GQ) because they explicitly represent the policy; consequently, the policy can be stochastic and utilize a large action space. In this paper, we illustrate how to practically combine the generality and learning potential of off-policy learning with the flexibility in action selection given by actor-critic methods. We derive an incremental algorithm with linear time and space complexity that includes eligibility traces, prove convergence under assumptions similar to those of previous off-policy algorithms, and empirically show better or comparable performance to existing algorithms on standard reinforcement-learning benchmark problems.
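To make the setting concrete, the sketch below shows the general shape of an incremental off-policy actor-critic update with linear function approximation and eligibility traces: a behavior policy generates actions, an importance-sampling ratio corrects for the mismatch with the target policy, a linear critic supplies the TD error, and the actor follows a traced log-policy gradient. This is a minimal illustration under assumptions, not the paper's exact algorithm; the variable names and step sizes are invented, and the critic here is a simplified TD(λ)-style update rather than the gradient-TD critic used in the paper.

```python
import numpy as np

# Illustrative sketch only: names, step sizes, and the simplified critic update
# are assumptions, not taken from the paper.

rng = np.random.default_rng(0)

n_features, n_actions = 8, 3
gamma, lam = 0.99, 0.4
alpha_v, alpha_u = 0.05, 0.01   # critic / actor step sizes

v = np.zeros(n_features)               # critic weights (linear value function)
u = np.zeros((n_actions, n_features))  # actor weights (softmax target policy)
e_v = np.zeros(n_features)             # critic eligibility trace
e_u = np.zeros_like(u)                 # actor eligibility trace

def target_probs(x):
    """Softmax target policy over actions for feature vector x."""
    prefs = u @ x
    p = np.exp(prefs - prefs.max())
    return p / p.sum()

def step(x, a, r, x_next, b_prob):
    """One incremental update; b_prob is the behavior policy's probability of a."""
    global e_v, e_u
    pi = target_probs(x)
    rho = pi[a] / b_prob                     # importance-sampling ratio
    delta = r + gamma * v @ x_next - v @ x   # TD error under the critic

    # Critic: off-policy TD(lambda)-style trace update (simplified here).
    e_v = rho * (gamma * lam * e_v + x)
    v[:] += alpha_v * delta * e_v

    # Actor: eligibility trace of the log-policy gradient, weighted by rho.
    grad_log = -np.outer(pi, x)
    grad_log[a] += x
    e_u = rho * (gamma * lam * e_u + grad_log)
    u[:] += alpha_u * delta * e_u

# Toy usage: random features, uniform-random behavior policy.
for _ in range(100):
    x = rng.random(n_features)
    a = int(rng.integers(n_actions))
    r = rng.random()
    x_next = rng.random(n_features)
    step(x, a, r, x_next, b_prob=1.0 / n_actions)
```

Both updates touch only the feature vector and the trace vectors, which is what gives the per-time-step cost linear in the number of learned weights described in the abstract.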
Cite
Text
Degris et al. "Linear Off-Policy Actor-Critic." International Conference on Machine Learning, 2012.
Markdown
[Degris et al. "Linear Off-Policy Actor-Critic." International Conference on Machine Learning, 2012.](https://mlanthology.org/icml/2012/degris2012icml-linear/)
BibTeX
@inproceedings{degris2012icml-linear,
title = {{Linear Off-Policy Actor-Critic}},
author = {Degris, Thomas and White, Martha and Sutton, Richard S.},
booktitle = {International Conference on Machine Learning},
year = {2012},
url = {https://mlanthology.org/icml/2012/degris2012icml-linear/}
}