DoMo-AC: Doubly Multi-Step Off-Policy Actor-Critic Algorithm
Abstract
Multi-step learning applies lookahead over multiple time steps and has proved valuable in policy evaluation settings. However, in the optimal control case, the impact of multi-step learning has been relatively limited despite a number of prior efforts. Fundamentally, this might be because multi-step policy improvements require operations that cannot be approximated by stochastic samples, hence hindering the widespread adoption of such methods in practice. To address these limitations, we introduce doubly multi-step off-policy value iteration (DoMo-VI), a novel oracle algorithm that combines multi-step policy improvements and policy evaluations. DoMo-VI enjoys a guaranteed convergence speed-up to the optimal policy and is applicable in general off-policy learning settings. We then propose doubly multi-step off-policy actor-critic (DoMo-AC), a practical instantiation of the DoMo-VI algorithm. DoMo-AC introduces a bias-variance trade-off that yields improved policy gradient estimates. When combined with the IMPALA architecture, DoMo-AC has shown improvements over the baseline algorithm on the Atari-57 game benchmarks.
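To make the multi-step policy improvement idea concrete, below is a minimal tabular sketch of h-step greedy value iteration on a toy MDP. This is a generic illustration of multi-step lookahead, not the paper's DoMo-VI operator; all names (`P`, `R`, `gamma`, `n_steps`) are assumptions introduced for this example.

```python
import numpy as np

# Illustrative tabular multi-step value iteration on a toy MDP.
# NOTE: a generic sketch of h-step lookahead, NOT the paper's DoMo-VI
# operator; P, R, gamma, n_steps are hypothetical names for this demo.

n_states, n_actions = 4, 2
gamma, n_steps = 0.9, 3

rng = np.random.default_rng(0)
# P[s, a] is a distribution over next states; R[s, a] is the expected reward.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(size=(n_states, n_actions))

V = np.zeros(n_states)
for _ in range(200):
    # One application of the n-step greedy operator: apply the one-step
    # optimal Bellman backup n_steps times before committing the update.
    V_new = V.copy()
    for _ in range(n_steps):
        Q = R + gamma * P @ V_new   # Q[s, a] = r(s, a) + gamma * E[V(s')]
        V_new = Q.max(axis=1)       # greedy improvement over the lookahead
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print("Value estimates:", V)
```

In this sketch the multi-step operator is exact because the model (`P`, `R`) is known; the difficulty the abstract points to is that such multi-step improvements are hard to approximate from stochastic samples alone, which is the gap DoMo-AC targets.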
Cite
Text
Tang et al. "DoMo-AC: Doubly Multi-Step Off-Policy Actor-Critic Algorithm." International Conference on Machine Learning, 2023.

Markdown

[Tang et al. "DoMo-AC: Doubly Multi-Step Off-Policy Actor-Critic Algorithm." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/tang2023icml-domoac/)

BibTeX
@inproceedings{tang2023icml-domoac,
title = {{DoMo-AC: Doubly Multi-Step Off-Policy Actor-Critic Algorithm}},
author = {Tang, Yunhao and Kozuno, Tadashi and Rowland, Mark and Harutyunyan, Anna and Munos, Remi and Avila Pires, Bernardo and Valko, Michal},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {33657--33673},
volume = {202},
url = {https://mlanthology.org/icml/2023/tang2023icml-domoac/}
}