Temporal Difference Models: Model-Free Deep RL for Model-Based Control
Abstract
Model-free reinforcement learning (RL) has proven to be a powerful, general tool for learning complex behaviors. However, its sample complexity is often impractically high for solving challenging real-world problems, even for off-policy algorithms such as Q-learning. A limiting factor in classic model-free RL is that the learning signal consists only of scalar rewards, ignoring much of the rich information contained in state transition tuples. Model-based RL uses this information by training a predictive model, but it often does not achieve the same asymptotic performance as model-free RL due to model bias. We introduce temporal difference models (TDMs), a family of goal-conditioned value functions that can be trained with model-free learning and used for model-based control. TDMs combine the benefits of model-free and model-based RL: they leverage the rich information in state transitions to learn very efficiently, while still attaining asymptotic performance that exceeds that of direct model-based RL methods. Our experimental results show that, on a range of continuous control tasks, TDMs provide a substantial improvement in efficiency compared to state-of-the-art model-based and model-free methods.
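The sketch below illustrates the core idea described in the abstract: a goal- and horizon-conditioned value function trained with standard model-free TD backups, where the terminal target measures how close the reached state is to a (relabeled) goal. This is a minimal, illustrative sketch only; the function and argument names (`q_func`, `policy`, `batch`, `tau`) are assumptions for exposition, not the authors' released implementation.

```python
import numpy as np

def tdm_bellman_targets(q_func, policy, batch):
    """Compute regression targets for a goal- and horizon-conditioned
    Q-function Q(s, a, g, tau), in the spirit of the TDM backup.

    q_func(s, a, g, tau) -> estimated (negative) distance to the goal
    policy(s, g, tau)    -> greedy action for that goal and horizon
    batch: dict of arrays with keys "s", "a", "s_next", "goal", "tau".
    """
    s_next = batch["s_next"]
    goal = batch["goal"]
    tau = batch["tau"]

    # When the remaining horizon hits zero, the target is simply how close
    # the reached state is to the goal (a negative distance reward).
    terminal = -np.linalg.norm(s_next - goal, axis=-1)

    # Otherwise, bootstrap from the same Q-function one step closer to the
    # deadline -- a standard off-policy TD backup.
    a_next = policy(s_next, goal, tau - 1)
    bootstrap = q_func(s_next, a_next, goal, tau - 1)

    return np.where(tau == 0, terminal, bootstrap)


if __name__ == "__main__":
    # Toy stand-ins for the learned networks, purely for illustration.
    rng = np.random.default_rng(0)
    dim = 2
    q_func = lambda s, a, g, t: -np.linalg.norm(s - g, axis=-1)
    policy = lambda s, g, t: np.zeros((len(s), dim))
    batch = {
        "s": rng.normal(size=(4, dim)),
        "a": rng.normal(size=(4, dim)),
        "s_next": rng.normal(size=(4, dim)),
        "goal": rng.normal(size=(4, dim)),
        "tau": np.array([0, 1, 2, 0]),
    }
    print(tdm_bellman_targets(q_func, policy, batch))
```

Because the targets are computed from relabeled goals and horizons rather than only the task reward, every transition tuple supervises the value function, which is the efficiency argument made in the abstract.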
Cite
Text
Pong et al. "Temporal Difference Models: Model-Free Deep RL for Model-Based Control." International Conference on Learning Representations, 2018.
Markdown
[Pong et al. "Temporal Difference Models: Model-Free Deep RL for Model-Based Control." International Conference on Learning Representations, 2018.](https://mlanthology.org/iclr/2018/pong2018iclr-temporal/)
BibTeX
@inproceedings{pong2018iclr-temporal,
title = {{Temporal Difference Models: Model-Free Deep RL for Model-Based Control}},
author = {Pong, Vitchyr and Gu, Shixiang and Dalal, Murtaza and Levine, Sergey},
booktitle = {International Conference on Learning Representations},
year = {2018},
url = {https://mlanthology.org/iclr/2018/pong2018iclr-temporal/}
}