Compositional Foundation Models for Hierarchical Planning
Abstract
To make effective decisions in novel environments with long-horizon goals, it is crucial to engage in hierarchical reasoning across spatial and temporal scales. This entails planning abstract subgoal sequences, visually reasoning about the underlying plans, and executing actions in accordance with the devised plan through visual-motor control. We propose *Compositional Foundation Models for Hierarchical Planning* (HiP), a foundation model that leverages multiple *expert* foundation models, trained *individually* on language, vision, and action data, together to solve long-horizon tasks. We use a large language model to construct symbolic plans that are grounded in the environment through a large video diffusion model. Generated video plans are then grounded to visual-motor control through an inverse dynamics model that infers actions from generated videos. To enable effective reasoning within this hierarchy, we enforce consistency between the models via *iterative refinement*. We illustrate the efficacy and adaptability of our approach on three different long-horizon table-top manipulation tasks.
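The abstract describes a three-level composition: an LLM proposes subgoal sequences, a video diffusion model turns each subgoal into a visual plan, and an inverse dynamics model recovers actions from consecutive frames, with iterative refinement enforcing consistency between levels. A minimal sketch of that control flow is below; all interfaces (`llm.propose_subgoals`, `video_model.sample`, `video_model.likelihood`, `inv_dyn.predict`) are hypothetical placeholders, not the authors' actual APIs, and the refinement step is simplified to selecting the candidate subgoal sequence the video model scores as most consistent with the observed scene.

```python
# Hedged sketch of the HiP hierarchy described in the abstract.
# All model interfaces (llm, video_model, inv_dyn) are assumed
# placeholders, not the paper's concrete implementation.

def hip_plan(goal_text, first_frame, llm, video_model, inv_dyn,
             num_candidates=4):
    """Compose language, video, and action models hierarchically."""
    # 1. Task planning: the LLM samples candidate subgoal sequences.
    candidates = [llm.propose_subgoals(goal_text)
                  for _ in range(num_candidates)]

    # Iterative refinement (simplified): keep the sequence the video
    # model judges most consistent with the observed first frame.
    subgoals = max(candidates,
                   key=lambda sg: video_model.likelihood(sg, first_frame))

    actions = []
    obs = first_frame
    for sg in subgoals:
        # 2. Visual planning: generate a video plan for this subgoal,
        #    conditioned on the last observed/generated frame.
        frames = video_model.sample(prompt=sg, init_frame=obs)
        # 3. Action planning: the inverse dynamics model infers the
        #    action between each pair of consecutive frames.
        for f_t, f_next in zip(frames[:-1], frames[1:]):
            actions.append(inv_dyn.predict(f_t, f_next))
        obs = frames[-1]
    return actions
```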
Cite
Text
Ajay et al. "Compositional Foundation Models for Hierarchical Planning." NeurIPS 2023 Workshops: FMDM, 2023.

Markdown

[Ajay et al. "Compositional Foundation Models for Hierarchical Planning." NeurIPS 2023 Workshops: FMDM, 2023.](https://mlanthology.org/neuripsw/2023/ajay2023neuripsw-compositional/)

BibTeX
@inproceedings{ajay2023neuripsw-compositional,
  title     = {{Compositional Foundation Models for Hierarchical Planning}},
  author    = {Ajay, Anurag and Han, Seungwook and Du, Yilun and Li, Shuang and Gupta, Abhi and Jaakkola, Tommi and Tenenbaum, Joshua and Kaelbling, Leslie and Srivastava, Akash and Agrawal, Pulkit},
  booktitle = {NeurIPS 2023 Workshops: FMDM},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/ajay2023neuripsw-compositional/}
}