Skill Decision Transformer

Abstract

Recent work has shown that Large Language Models (LLMs) can be highly effective for offline reinforcement learning (RL) by recasting the traditional RL problem as a sequence modelling problem. However, many of these methods optimize only for high returns and may not extract much information from a diverse dataset of trajectories. Generalized Decision Transformers (GDTs) have shown that utilizing future trajectory information, in the form of information statistics, can help extract more information from offline trajectory data. Building upon this, we propose the Skill Decision Transformer (Skill DT). Skill DT draws inspiration from hindsight relabelling and skill discovery methods to discover a diverse set of *primitive behaviors*, or skills. We show that Skill DT can not only perform offline state-marginal matching (SMM), but can also discover descriptive behaviors that can be easily sampled. Furthermore, we show that through purely reward-free optimization, Skill DT is still competitive with supervised offline RL approaches on the D4RL benchmark.

Cite

Text

Sudhakaran and Risi. "Skill Decision Transformer." NeurIPS 2022 Workshops: FMDM, 2022.

Markdown

[Sudhakaran and Risi. "Skill Decision Transformer." NeurIPS 2022 Workshops: FMDM, 2022.](https://mlanthology.org/neuripsw/2022/sudhakaran2022neuripsw-skill/)

BibTeX

@inproceedings{sudhakaran2022neuripsw-skill,
  title     = {{Skill Decision Transformer}},
  author    = {Sudhakaran, Shyam and Risi, Sebastian},
  booktitle = {NeurIPS 2022 Workshops: FMDM},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/sudhakaran2022neuripsw-skill/}
}