SEAL: SEmantic-Augmented Imitation Learning via Language Model

Abstract

Hierarchical Imitation Learning (HIL) is effective for long-horizon decision-making, but it often requires extensive expert demonstrations and precise supervisory labels. In this work, we introduce SEAL, a novel framework that leverages the semantic and world knowledge embedded in Large Language Models (LLMs) to autonomously define sub-goal spaces and pre-label states with semantically meaningful sub-goal representations, without requiring prior task hierarchy knowledge. SEAL utilizes a dual-encoder architecture that combines LLM-guided supervised sub-goal learning with unsupervised Vector Quantization (VQ) to enhance the robustness of sub-goal representations. Additionally, SEAL incorporates a transition-augmented low-level planner, which improves adaptation to sub-goal transitions. Our experimental results demonstrate that SEAL outperforms state-of-the-art HIL and LLM-based planning approaches, particularly when working with small expert datasets and complex long-horizon tasks.
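The abstract describes a dual-encoder that combines LLM-labeled supervised sub-goal learning with unsupervised Vector Quantization. As a rough illustration only, below is a minimal PyTorch sketch of one way such a dual-encoder could be wired up; the module names, dimensions, and loss terms are assumptions for exposition, not the paper's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoderSubgoal(nn.Module):
    # Hypothetical sketch: a shared backbone feeds (1) a head trained against
    # LLM-provided sub-goal labels and (2) a VQ codebook that clusters states
    # into discrete sub-goal codes without any labels.
    def __init__(self, state_dim, hidden_dim, num_subgoals):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.llm_head = nn.Linear(hidden_dim, num_subgoals)     # supervised branch
        self.codebook = nn.Embedding(num_subgoals, hidden_dim)  # unsupervised VQ branch

    def forward(self, state, llm_label=None):
        z = self.backbone(state)                      # (B, H) state embedding
        dists = torch.cdist(z, self.codebook.weight)  # (B, K) distance to each code
        code_idx = dists.argmin(dim=-1)               # nearest code per state
        z_q = self.codebook(code_idx)                 # quantized sub-goal embedding

        losses = {
            # Standard VQ-VAE codebook + commitment terms.
            "vq": F.mse_loss(z_q, z.detach()) + 0.25 * F.mse_loss(z, z_q.detach()),
        }
        if llm_label is not None:
            # Align the supervised branch with the LLM-generated sub-goal label.
            losses["llm"] = F.cross_entropy(self.llm_head(z), llm_label)

        z_q = z + (z_q - z).detach()  # straight-through estimator for downstream use
        return z_q, code_idx, losses

# Example usage with made-up shapes (16 states of dimension 32, 6 sub-goals):
# model = DualEncoderSubgoal(state_dim=32, hidden_dim=128, num_subgoals=6)
# z_q, codes, losses = model(torch.randn(16, 32), torch.randint(0, 6, (16,)))
# total_loss = sum(losses.values())

The two branches share a backbone, so the LLM labels and the VQ clustering regularize the same representation; how SEAL actually fuses or weights the two signals is not specified here and would follow the paper.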

Cite

Text

Gu et al. "SEAL: SEmantic-Augmented Imitation Learning via Language Model." ICLR 2025 Workshops: World_Models, 2025.

Markdown

[Gu et al. "SEAL: SEmantic-Augmented Imitation Learning via Language Model." ICLR 2025 Workshops: World_Models, 2025.](https://mlanthology.org/iclrw/2025/gu2025iclrw-seal/)

BibTeX

@inproceedings{gu2025iclrw-seal,
  title     = {{SEAL: SEmantic-Augmented Imitation Learning via Language Model}},
  author    = {Gu, Chengyang and Pan, Yuxin and Bai, Haotian and Xiong, Hui and Chen, Yize},
  booktitle = {ICLR 2025 Workshops: World_Models},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/gu2025iclrw-seal/}
}