Efficient Task Adaptation by Mixing Discovered Skills

Abstract

Unsupervised skill discovery is an approach in which an agent learns potentially useful and distinct behaviors without any explicit reward. The agent is then expected to quickly solve downstream tasks by properly using the set of discovered skills rather than learning everything from scratch. However, optimally utilizing the discovered skills for each task, which can be viewed as a form of fine-tuning, is non-trivial and has received little attention in the literature despite its importance. In this paper, we compare several fine-tuning methods and show how inefficiently they utilize the discovered skills, and we propose new methods that are sample-efficient and effective by interpreting skills from the perspective of how an agent transforms the input state. Our code is available at https://github.com/jsrimr/unsupervisedRL
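
The repository above contains the authors' code; the following is only a rough, minimal sketch of the general idea of adapting to a downstream task by mixing frozen, pre-discovered skills. It assumes DIAYN-style skill-conditioned policies and a task-specific mixing head trained on task reward; all class and function names (SkillPolicy, SkillMixer, mixed_action) are hypothetical and are not taken from the paper or its repository.

# Minimal sketch (not the authors' implementation): mix frozen discovered
# skills with a small task-specific head trained on downstream reward.
import torch
import torch.nn as nn

class SkillPolicy(nn.Module):
    """Skill-conditioned policy pi(a | s, z) learned without rewards (e.g. DIAYN-style)."""
    def __init__(self, state_dim: int, skill_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + skill_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),
        )

    def forward(self, state: torch.Tensor, skill: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, skill], dim=-1))

class SkillMixer(nn.Module):
    """Task-specific head producing mixture weights over K discovered skills."""
    def __init__(self, state_dim: int, num_skills: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, num_skills),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.net(state), dim=-1)

def mixed_action(state, skill_policy, mixer, skill_embeddings):
    # Each frozen skill proposes an action for the current state; the mixer
    # combines them with state-dependent weights.
    K = skill_embeddings.shape[0]
    states = state.unsqueeze(0).expand(K, -1)                        # (K, state_dim)
    with torch.no_grad():                                            # skills stay frozen
        per_skill_actions = skill_policy(states, skill_embeddings)   # (K, action_dim)
    weights = mixer(state)                                           # (K,)
    return weights @ per_skill_actions                               # weighted mix of skill actions

if __name__ == "__main__":
    state_dim, action_dim, K = 8, 2, 5
    skill_policy = SkillPolicy(state_dim, K, action_dim)
    skill_policy.requires_grad_(False)        # pretrained skills are not fine-tuned
    mixer = SkillMixer(state_dim, K)          # only the mixer is trained on task reward
    skills = torch.eye(K)                     # one-hot skill latents, one per discovered skill
    action = mixed_action(torch.randn(state_dim), skill_policy, mixer, skills)
    print(action.shape)                       # torch.Size([2])

In this sketch only the mixer's parameters would receive gradients from the downstream task, which illustrates why such adaptation can be more sample-efficient than retraining the skill policies from scratch.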

Cite

Text

Rhim et al. "Efficient Task Adaptation by Mixing Discovered Skills." ICML 2022 Workshops: Pre-Training, 2022.

Markdown

[Rhim et al. "Efficient Task Adaptation by Mixing Discovered Skills." ICML 2022 Workshops: Pre-Training, 2022.](https://mlanthology.org/icmlw/2022/rhim2022icmlw-efficient/)

BibTeX

@inproceedings{rhim2022icmlw-efficient,
  title     = {{Efficient Task Adaptation by Mixing Discovered Skills}},
  author    = {Rhim, Jungsub and Yang, Eunseok and Kim, Taesup},
  booktitle = {ICML 2022 Workshops: Pre-Training},
  year      = {2022},
  url       = {https://mlanthology.org/icmlw/2022/rhim2022icmlw-efficient/}
}