Numerical Goal-Based Transformers for Practical Conditions
Abstract
Goal-conditioned reinforcement learning (GCRL) aims to deploy trained agents in realistic environments. In particular, offline reinforcement learning is studied as a way to reduce the cost of online interaction in GCRL. One such method is the Decision Transformer (DT), which conditions on a numerical goal called the "return-to-go" to achieve strong performance. Since DT assumes an idealized setting, such as perfect knowledge of rewards, improved approaches are needed for real-world applications. In this work, we present various attempts and results toward numerical goal-based transformers that operate under practical conditions.
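For context, the return-to-go that DT conditions on is, as commonly defined in the Decision Transformer literature, the sum of rewards remaining from the current timestep (the notation below is ours, not taken from this paper):

$$\hat{R}_t = \sum_{t'=t}^{T} r_{t'}$$

DT then models trajectories as token sequences of the form $(\hat{R}_1, s_1, a_1, \hat{R}_2, s_2, a_2, \ldots)$. The "practical conditions" studied here concern settings where the reward information needed to compute and supply $\hat{R}_t$ is not perfectly known.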
Cite

Text

Kim et al. "Numerical Goal-Based Transformers for Practical Conditions." NeurIPS 2023 Workshops: GCRL, 2023.

Markdown

[Kim et al. "Numerical Goal-Based Transformers for Practical Conditions." NeurIPS 2023 Workshops: GCRL, 2023.](https://mlanthology.org/neuripsw/2023/kim2023neuripsw-numerical/)

BibTeX
@inproceedings{kim2023neuripsw-numerical,
  title     = {{Numerical Goal-Based Transformers for Practical Conditions}},
  author    = {Kim, Seonghyun and Noh, Samyeul and Jang, Ingook},
  booktitle = {NeurIPS 2023 Workshops: GCRL},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/kim2023neuripsw-numerical/}
}