Optimal Dynamic Regret by Transformers for Non-Stationary Reinforcement Learning
Abstract
Transformers have demonstrated exceptional performance across a wide range of domains. While their ability to perform reinforcement learning in-context has been established both theoretically and empirically, their behavior in non-stationary environments remains less understood. In this study, we address this gap by showing that transformers can achieve nearly optimal dynamic regret bounds in non-stationary settings. We prove that transformers can approximate strategies used to handle non-stationary environments, and that they can learn such an approximator in the in-context learning setup. Our experiments further show that transformers can match or even outperform existing expert algorithms in such environments.
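The page itself does not define dynamic regret. As a point of reference, a sketch of the standard notion used in non-stationary RL is given below; the episode count T, the per-episode policies and value functions, and the initial state s_1 are our notation for illustration, not symbols taken from the paper.

```latex
% Dynamic regret over T episodes with time-varying MDPs M_1, ..., M_T.
% \pi_t is the learner's policy in episode t, and V_t^* and V_t^{\pi_t}
% are the optimal and learner values under M_t. Unlike static regret,
% the comparator (the optimal policy of M_t) changes with every episode.
\[
  \mathrm{DynReg}(T) \;=\; \sum_{t=1}^{T} \left( V_t^{*}(s_1) - V_t^{\pi_t}(s_1) \right)
\]
```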
Cite
Text
Chen et al. "Optimal Dynamic Regret by Transformers for Non-Stationary Reinforcement Learning." Advances in Neural Information Processing Systems, 2025.

Markdown

[Chen et al. "Optimal Dynamic Regret by Transformers for Non-Stationary Reinforcement Learning." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/chen2025neurips-optimal/)

BibTeX
@inproceedings{chen2025neurips-optimal,
title = {{Optimal Dynamic Regret by Transformers for Non-Stationary Reinforcement Learning}},
author = {Chen, Baiyuan and Ito, Shinji and Imaizumi, Masaaki},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/chen2025neurips-optimal/}
}