Deep Conservative Policy Iteration
Abstract
Conservative Policy Iteration (CPI) is a founding algorithm of Approximate Dynamic Programming (ADP). Its core principle is to stabilize greediness through stochastic mixtures of consecutive policies. It comes with strong theoretical guarantees, and inspired approaches in deep Reinforcement Learning (RL). However, CPI itself has rarely been implemented, never with neural networks, and has only been tested on toy problems. In this paper, we show how CPI can be practically combined with deep RL with discrete actions, in an off-policy manner. We also introduce adaptive mixture rates inspired by the theory. We thoroughly evaluate the resulting algorithm on the simple Cartpole problem, and validate the proposed method on a representative subset of Atari games. Overall, this work suggests that revisiting classic ADP may lead to improved and more stable deep RL algorithms.
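The mixture update the abstract refers to can be sketched as follows. This is a minimal illustration of the CPI principle for a single state, not the authors' implementation; the function names `greedy_policy` and `cpi_update` are hypothetical, and the mixture rate `alpha` is the quantity the paper proposes to set adaptively.

```python
def greedy_policy(q):
    """Deterministic greedy distribution over actions from Q-values q."""
    best = max(range(len(q)), key=lambda a: q[a])
    return [1.0 if a == best else 0.0 for a in range(len(q))]

def cpi_update(pi, q, alpha):
    """CPI step: the new policy is a stochastic mixture of the current
    policy pi and the greedy policy w.r.t. q (shown for one state)."""
    g = greedy_policy(q)
    return [(1.0 - alpha) * p + alpha * gp for p, gp in zip(pi, g)]

pi = [0.5, 0.5]   # current policy over 2 actions
q = [1.0, 2.0]    # Q-values: action 1 is greedy
new_pi = cpi_update(pi, q, alpha=0.1)
# -> [0.45, 0.55]: probability mass shifts conservatively toward the greedy action
```

A small `alpha` keeps consecutive policies close, which is what stabilizes the greedy step and underlies CPI's theoretical guarantees.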
Cite
Text
Vieillard et al. "Deep Conservative Policy Iteration." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I04.6070
Markdown
[Vieillard et al. "Deep Conservative Policy Iteration." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/vieillard2020aaai-deep/) doi:10.1609/AAAI.V34I04.6070
BibTeX
@inproceedings{vieillard2020aaai-deep,
title = {{Deep Conservative Policy Iteration}},
author = {Vieillard, Nino and Pietquin, Olivier and Geist, Matthieu},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2020},
pages = {6070-6077},
doi = {10.1609/AAAI.V34I04.6070},
url = {https://mlanthology.org/aaai/2020/vieillard2020aaai-deep/}
}