Return-Aligned Decision Transformer
Abstract
Traditional approaches in offline reinforcement learning aim to learn the optimal policy that maximizes the cumulative reward, also known as return. It is increasingly important to adjust the performance of AI agents to meet human requirements, for example, in applications like video games and education tools. Decision Transformer (DT) optimizes a policy that generates actions conditioned on the target return through supervised learning and includes a mechanism to control the agent's performance using the target return. However, the action generation is hardly influenced by the target return because DT’s self-attention allocates scarce attention scores to the return tokens. In this paper, we propose Return-Aligned Decision Transformer (RADT), designed to more effectively align the actual return with the target return. RADT leverages features extracted by paying attention solely to the return, enabling action generation to consistently depend on the target return. Extensive experiments show that RADT significantly reduces the discrepancies between the actual return and the target return compared to DT-based methods.
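The core idea stated above, extracting features by attending only to the return so that action generation consistently depends on the target return, can be illustrated with a minimal sketch. This is not the authors' implementation; the module name, head count, and token shapes below are illustrative assumptions, showing cross-attention in which queries come from state/action features while keys and values come solely from return tokens.

```python
import torch
import torch.nn as nn


class ReturnOnlyAttention(nn.Module):
    """Cross-attention where queries are state/action features and keys/values
    come only from the return tokens, so the output features always depend on
    the target return (a sketch of the mechanism described in the abstract)."""

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, seq_tokens: torch.Tensor, return_tokens: torch.Tensor) -> torch.Tensor:
        # seq_tokens:    (batch, seq_len, d_model)      embedded states/actions
        # return_tokens: (batch, num_returns, d_model)  embedded target return(s)
        out, _ = self.attn(query=seq_tokens, key=return_tokens, value=return_tokens)
        return out


if __name__ == "__main__":
    batch, seq_len, d_model = 2, 10, 64
    layer = ReturnOnlyAttention(d_model)
    seq = torch.randn(batch, seq_len, d_model)  # hypothetical state/action embeddings
    rtg = torch.randn(batch, 1, d_model)        # hypothetical target-return embedding
    features = layer(seq, rtg)                  # (2, 10, 64), conditioned on the return
    print(features.shape)
```

In contrast, a standard Decision Transformer applies self-attention over the mixed (return, state, action) sequence, where return tokens may receive little attention; restricting keys and values to the return tokens is one way to read the alignment mechanism the abstract describes.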
Cite
Text
Tanaka et al. "Return-Aligned Decision Transformer." Transactions on Machine Learning Research, 2025.
Markdown
[Tanaka et al. "Return-Aligned Decision Transformer." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/tanaka2025tmlr-returnaligned/)
BibTeX
@article{tanaka2025tmlr-returnaligned,
title = {{Return-Aligned Decision Transformer}},
author = {Tanaka, Tsunehiko and Abe, Kenshi and Ariu, Kaito and Morimura, Tetsuro and Simo-Serra, Edgar},
journal = {Transactions on Machine Learning Research},
year = {2025},
url = {https://mlanthology.org/tmlr/2025/tanaka2025tmlr-returnaligned/}
}