Accelerating Transformers in Online RL
Abstract
The advent of transformer-based models in Reinforcement Learning (RL) has expanded the range of possibilities in robotics tasks, but it has also brought a wide range of implementation challenges, especially in model-free online RL. Some existing learning algorithms cannot easily be combined with transformer-based models because of the latter's instability. In this paper, we propose a method that uses an Accelerator agent as the transformer's trainer. In the first stage of the proposed algorithm, the Accelerator, a simpler and more stable model, interacts with the environment independently while simultaneously training the transformer through behavior cloning. In the second stage, the pretrained transformer begins interacting with the environment in a fully online setting. As a result, the algorithm accelerates the transformer's performance and helps it train online more stably.
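The abstract's two-stage scheme can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the authors' code: the `TransformerPolicy` and `DummyEnv`-free rollout stand-ins, the MLP Accelerator, and the elided RL updates are all hypothetical placeholders around the one element the abstract does specify, behavior cloning of the transformer from the Accelerator's actions in stage 1 followed by fully online interaction in stage 2.

```python
import torch
import torch.nn as nn

# Minimal sketch of the two-stage scheme described in the abstract.
# All names here are illustrative assumptions, not the authors' code.
# The Accelerator's own RL update and the stage-2 online RL update
# are elided as placeholders.

class TransformerPolicy(nn.Module):
    """Small transformer mapping a window of observations to an action."""
    def __init__(self, obs_dim, act_dim, d_model=64, context=8):
        super().__init__()
        self.context = context
        self.embed = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, act_dim)

    def forward(self, obs_seq):                # obs_seq: (B, T, obs_dim)
        h = self.encoder(self.embed(obs_seq))
        return self.head(h[:, -1])             # act from the last token

obs_dim, act_dim = 8, 2
accelerator = nn.Sequential(                   # simpler, more stable policy
    nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
transformer = TransformerPolicy(obs_dim, act_dim)
bc_opt = torch.optim.Adam(transformer.parameters(), lr=3e-4)

# Stage 1: Accelerator interacts; transformer learns by behavior cloning.
for step in range(1000):
    obs_seq = torch.randn(1, transformer.context, obs_dim)  # stand-in rollout
    with torch.no_grad():
        target_act = accelerator(obs_seq[:, -1])  # Accelerator's action
    # (the Accelerator would also take its own RL gradient step here)
    bc_loss = nn.functional.mse_loss(transformer(obs_seq), target_act)
    bc_opt.zero_grad(); bc_loss.backward(); bc_opt.step()

# Stage 2: the pretrained transformer now acts and trains fully online.
for step in range(1000):
    obs_seq = torch.randn(1, transformer.context, obs_dim)
    action = transformer(obs_seq)
    # ... environment step and the chosen online RL update go here ...
```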
Cite
Text
Zelezetsky et al. "Accelerating Transformers in Online RL." ICLR 2025 Workshops: WRL, 2025.
Markdown
[Zelezetsky et al. "Accelerating Transformers in Online RL." ICLR 2025 Workshops: WRL, 2025.](https://mlanthology.org/iclrw/2025/zelezetsky2025iclrw-accelerating/)
BibTeX
@inproceedings{zelezetsky2025iclrw-accelerating,
title = {{Accelerating Transformers in Online RL}},
author = {Zelezetsky, Daniil and Kovalev, Alexey and Panov, Aleksandr},
booktitle = {ICLR 2025 Workshops: WRL},
year = {2025},
url = {https://mlanthology.org/iclrw/2025/zelezetsky2025iclrw-accelerating/}
}