Transformers Can Learn Temporal Difference Methods for In-Context Reinforcement Learning
Abstract
Traditionally, reinforcement learning (RL) agents learn to solve new tasks by updating their neural network parameters through interactions with the task environment. However, recent works demonstrate that some RL agents, after certain pretraining procedures, can learn to solve new, unseen tasks without parameter updates, a phenomenon known as in-context reinforcement learning (ICRL). The empirical success of ICRL is widely attributed to the hypothesis that the forward pass of the pretrained agent's neural network implements an RL algorithm. In this paper, we support this hypothesis by showing, both empirically and theoretically, that when a transformer is trained for policy evaluation tasks, it can discover and learn to implement temporal difference learning in its forward pass.
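For context, the temporal difference method referenced in the abstract is standard TD(0) policy evaluation. The sketch below is not the paper's transformer construction; it is a minimal tabular TD(0) implementation, with a hypothetical `env_step(state)` sampler assumed to return one transition of the policy being evaluated.

```python
import numpy as np

def td0_policy_evaluation(env_step, num_states, num_episodes=500,
                          alpha=0.1, gamma=0.99):
    """Tabular TD(0) policy evaluation: estimate the value function V of a
    fixed policy from sampled transitions.

    `env_step(state)` is an assumed helper returning (reward, next_state, done)
    for one step under the policy being evaluated.
    """
    V = np.zeros(num_states)
    for _ in range(num_episodes):
        state, done = 0, False
        while not done:
            reward, next_state, done = env_step(state)
            # TD(0) update: move V(s) toward the bootstrapped target r + gamma * V(s')
            target = reward + gamma * V[next_state] * (not done)
            V[state] += alpha * (target - V[state])
            state = next_state
    return V
```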
Cite
Text
Wang et al. "Transformers Can Learn Temporal Difference Methods for In-Context Reinforcement Learning." International Conference on Learning Representations, 2025.
Markdown
[Wang et al. "Transformers Can Learn Temporal Difference Methods for In-Context Reinforcement Learning." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/wang2025iclr-transformers/)
BibTeX
@inproceedings{wang2025iclr-transformers,
title = {{Transformers Can Learn Temporal Difference Methods for In-Context Reinforcement Learning}},
author = {Wang, Jiuqi and Blaser, Ethan and Daneshmand, Hadi and Zhang, Shangtong},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/wang2025iclr-transformers/}
}