Planning Immediate Landmarks of Targets for Model-Free Skill Transfer Across Agents
Abstract
In reinforcement learning applications, agents often face heterogeneous input/output features because their developers or physical constraints specify different state and action spaces; this forces re-training from scratch and incurs considerable sample inefficiency, even when the agents follow similar solution steps to achieve their tasks. In this paper, we aim to transfer pre-trained skills to alleviate this challenge. Specifically, we propose PILoT, i.e., Planning Immediate Landmarks of Targets. PILoT uses universal decoupled policy optimization to learn a goal-conditioned state planner; we then distill a goal-planner that plans immediate landmarks in a model-free manner and can be shared among different agents. In our experiments, we demonstrate PILoT on a range of transfer challenges, including few-shot transfer across action spaces and dynamics, from low-dimensional vector states to image inputs, and from a simple robot to a complicated morphology; we also show that PILoT provides a zero-shot transfer solution from a simple 2D navigation task to the harder Ant-Maze task.
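To make the idea concrete, below is a minimal sketch (not the authors' code) of the core abstraction the abstract describes: a goal-conditioned state planner that maps the current state and final goal to the next landmark state, which a new agent with its own action space can then reach with a low-level goal-reaching policy. The class names, network sizes, and the supervised distillation loss are illustrative assumptions; the actual universal decoupled policy optimization and distillation procedure of PILoT are not reproduced here.

```python
# Hypothetical sketch of a goal-conditioned state planner for landmark planning.
import torch
import torch.nn as nn

class StatePlanner(nn.Module):
    """Predicts the next landmark state from (current state, final goal)."""
    def __init__(self, state_dim: int, goal_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, goal], dim=-1))

def distill_step(planner, optimizer, states, goals, next_landmarks):
    """One supervised distillation step (assumed MSE loss) on landmark
    transitions (s_t, g, s_{t+1}) collected from a pre-trained source agent."""
    pred = planner(states, goals)
    loss = nn.functional.mse_loss(pred, next_landmarks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# At transfer time, the planner is kept frozen and shared; a new agent only
# trains a low-level policy to reach each planned landmark in its own action space.
planner = StatePlanner(state_dim=8, goal_dim=2)
opt = torch.optim.Adam(planner.parameters(), lr=3e-4)
batch = (torch.randn(32, 8), torch.randn(32, 2), torch.randn(32, 8))  # placeholder data
print(distill_step(planner, opt, *batch))
```

Because the planner operates purely in a shared (goal/state) space and never outputs actions, it is the piece that transfers across agents with different action spaces, dynamics, or observation modalities.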
Cite
Text

Liu et al. "Planning Immediate Landmarks of Targets for Model-Free Skill Transfer Across Agents." NeurIPS 2022 Workshops: DeepRL, 2022.

Markdown

[Liu et al. "Planning Immediate Landmarks of Targets for Model-Free Skill Transfer Across Agents." NeurIPS 2022 Workshops: DeepRL, 2022.](https://mlanthology.org/neuripsw/2022/liu2022neuripsw-planning/)

BibTeX
@inproceedings{liu2022neuripsw-planning,
  title     = {{Planning Immediate Landmarks of Targets for Model-Free Skill Transfer Across Agents}},
  author    = {Liu, Minghuan and Zhu, Zhengbang and Zhu, Menghui and Zhuang, Yuzheng and Zhang, Weinan and Hao, Jianye},
  booktitle = {NeurIPS 2022 Workshops: DeepRL},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/liu2022neuripsw-planning/}
}