Online Decision Transformer
Abstract
Recent work has shown that offline reinforcement learning (RL) can be formulated as a sequence modeling problem (Chen et al., 2021; Janner et al., 2021) and solved via approaches similar to large-scale language modeling. However, any practical instantiation of RL also involves an online component, where policies pretrained on passive offline datasets are finetuned via task-specific interactions with the environment. We propose Online Decision Transformers (ODT), an RL algorithm based on sequence modeling that blends offline pretraining with online finetuning in a unified framework. Our framework uses sequence-level entropy regularizers in conjunction with autoregressive modeling objectives for sample-efficient exploration and finetuning. Empirically, we show that ODT is competitive with the state-of-the-art in absolute performance on the D4RL benchmark but shows much more significant gains during the finetuning procedure.
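The two ingredients named in the abstract, an autoregressive action-prediction objective and a sequence-level entropy regularizer for online exploration, can be sketched as a single training loss. The snippet below is a minimal illustration, not the authors' released code; the function name `odt_action_loss`, the Gaussian policy head, and the SAC-style dual update of a learnable temperature are assumptions consistent with the abstract's description.

```python
import torch
from torch.distributions import Normal

def odt_action_loss(
    action_mean: torch.Tensor,      # (batch, seq_len, act_dim) predicted means
    action_logstd: torch.Tensor,    # (batch, seq_len, act_dim) predicted log-stds
    actions: torch.Tensor,          # (batch, seq_len, act_dim) actions from the data/replay buffer
    log_temperature: torch.Tensor,  # scalar learnable parameter (hypothetical)
    target_entropy: float,          # e.g. -act_dim, an assumed target
):
    """Sketch of an entropy-regularized autoregressive action loss in the
    spirit of ODT, assuming a Gaussian policy head over continuous actions."""
    dist = Normal(action_mean, action_logstd.exp())

    # Autoregressive modeling objective: negative log-likelihood of actions.
    nll = -dist.log_prob(actions).sum(dim=-1).mean()

    # Sequence-level entropy of the predicted action distribution.
    entropy = dist.entropy().sum(dim=-1).mean()

    # Entropy-regularized policy loss; the temperature is treated as fixed here.
    temperature = log_temperature.exp().detach()
    policy_loss = nll - temperature * entropy

    # Dual (SAC-style) update: raise the temperature when entropy falls
    # below the target, lower it otherwise.
    temperature_loss = log_temperature.exp() * (entropy.detach() - target_entropy)

    return policy_loss, temperature_loss
```

In such a setup, `policy_loss` would update the transformer and policy head while `temperature_loss` updates only `log_temperature`, keeping exploration near the chosen entropy target during online finetuning.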
Cite
Text
Zheng et al. "Online Decision Transformer." International Conference on Machine Learning, 2022.
Markdown
[Zheng et al. "Online Decision Transformer." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/zheng2022icml-online/)
BibTeX
@inproceedings{zheng2022icml-online,
title = {{Online Decision Transformer}},
author = {Zheng, Qinqing and Zhang, Amy and Grover, Aditya},
booktitle = {International Conference on Machine Learning},
year = {2022},
pages = {27042--27059},
volume = {162},
url = {https://mlanthology.org/icml/2022/zheng2022icml-online/}
}