Robust Subtask Learning for Compositional Generalization
Abstract
Compositional reinforcement learning is a promising approach for training policies to perform complex long-horizon tasks. Typically, a high-level task is decomposed into a sequence of subtasks, and a separate policy is trained to perform each subtask. In this paper, we focus on the problem of training subtask policies such that they can be used to perform any task, where a task is given by a sequence of subtasks. We aim to maximize the worst-case performance over all tasks rather than the average-case performance. We formulate the problem as a two-agent zero-sum game in which the adversary picks the sequence of subtasks. We propose two RL algorithms to solve this game: one is an adaptation of existing multi-agent RL algorithms to our setting, and the other is an asynchronous version that enables parallel training of subtask policies. We evaluate our approach on two multi-task environments with continuous states and actions, and demonstrate that our algorithms outperform state-of-the-art baselines.
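To make the max-min formulation concrete, here is a minimal Python sketch (not the authors' implementation) of the two-player game described above: the adversary poses the subtask on which the current policies perform worst, and training effort is directed there, which pushes up the worst-case performance over all subtask sequences. The scalar success-rate abstraction and the train_step update are illustrative assumptions.

# A minimal sketch of the adversarial training loop, assuming subtask
# policies can be summarized by per-subtask success rates and that
# `train_step` stands in for one RL update (e.g., a policy-gradient step).

NUM_SUBTASKS = 4
success = [0.2, 0.5, 0.3, 0.9]  # assumed initial per-subtask performance

def train_step(k):
    """Hypothetical RL update: nudge subtask k's success rate upward."""
    success[k] = min(1.0, success[k] + 0.05)

for _ in range(50):
    # Adversary move: choose the worst-performing subtask, approximating
    # the minimizing player of the zero-sum game over subtask sequences.
    k = min(range(NUM_SUBTASKS), key=lambda j: success[j])
    # Protagonist move: improve the policy for the chosen subtask.
    train_step(k)

print("worst-case subtask success:", min(success))

Because a task's value is bounded by its weakest subtask, raising the minimum per-subtask performance directly raises the worst-case return over all tasks, which is the objective the paper targets.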
Cite
Text
Jothimurugan et al. "Robust Subtask Learning for Compositional Generalization." International Conference on Machine Learning, 2023.
Markdown
[Jothimurugan et al. "Robust Subtask Learning for Compositional Generalization." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/jothimurugan2023icml-robust/)
BibTeX
@inproceedings{jothimurugan2023icml-robust,
title = {{Robust Subtask Learning for Compositional Generalization}},
author = {Jothimurugan, Kishor and Hsu, Steve and Bastani, Osbert and Alur, Rajeev},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {15371--15387},
volume = {202},
url = {https://mlanthology.org/icml/2023/jothimurugan2023icml-robust/}
}