Foundation Policies with Hilbert Representations
Abstract
Unsupervised and self-supervised objectives, such as next-token prediction, have enabled pre-training generalist models from large amounts of unlabeled data. In reinforcement learning (RL), however, finding a truly general and scalable unsupervised pre-training objective for generalist policies from offline data remains a major open question. While a number of methods have been proposed to enable generic self-supervised RL, based on principles such as goal-conditioned RL, behavioral cloning, and unsupervised skill learning, such methods remain limited by the diversity of the discovered behaviors, the need for high-quality demonstration data, or the lack of a clear adaptation mechanism for downstream tasks. In this work, we propose a novel unsupervised framework to pre-train generalist policies that capture diverse, optimal, long-horizon behaviors from unlabeled offline data such that they can be quickly adapted to arbitrary new tasks in a zero-shot manner. Our key insight is to learn a structured representation that preserves the temporal structure of the underlying environment, and then to span this learned latent space with directional movements, which enables various zero-shot policy "prompting" schemes for downstream tasks. Through our experiments on simulated robotic locomotion and manipulation benchmarks, we show that our unsupervised policies can solve goal-conditioned and general RL tasks in a zero-shot fashion, often even outperforming prior methods designed specifically for each setting. Our code and videos are available at https://seohong.me/projects/hilp/
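To make the two ideas above concrete, the following is a minimal NumPy sketch of how a temporal-distance-preserving representation and a direction-conditioned policy might be queried at test time. Everything here is a hypothetical stand-in: `phi`, `pi`, `goal_prompt`, and the random linear weights are placeholders for illustration, not the authors' released implementation. The sketch assumes a representation trained so that latent distances reflect temporal distances, and a policy trained to move along latent directions, matching the abstract's description of "spanning the latent space with directional movements."

```python
# Minimal sketch of zero-shot goal prompting with a Hilbert-style
# representation. All networks are mocked as random linear maps;
# shapes and names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, LATENT_DIM, ACTION_DIM = 16, 8, 4

# Placeholder "pretrained" weights, purely for illustration.
W_phi = rng.normal(size=(LATENT_DIM, STATE_DIM))
W_pi = rng.normal(size=(ACTION_DIM, STATE_DIM + LATENT_DIM))

def phi(s):
    """Representation trained so that ||phi(s) - phi(g)|| reflects the
    temporal distance (number of environment steps) from s to g."""
    return W_phi @ s

def pi(s, z):
    """Latent-conditioned policy trained to move in latent direction z,
    i.e., to make phi(s') - phi(s) align with z at every step."""
    return np.tanh(W_pi @ np.concatenate([s, z]))

def directional_reward(s, s_next, z):
    # Intrinsic reward for pre-training: latent displacement along z.
    return (phi(s_next) - phi(s)) @ z

def goal_prompt(s, g):
    # Zero-shot goal-conditioned "prompt": the unit vector pointing from
    # the current state's latent toward the goal's latent.
    d = phi(g) - phi(s)
    return d / (np.linalg.norm(d) + 1e-8)

s = rng.normal(size=STATE_DIM)       # current state
g = rng.normal(size=STATE_DIM)       # goal state
s_next = rng.normal(size=STATE_DIM)  # hypothetical next state

z = goal_prompt(s, g)                # pick a direction toward the goal
a = pi(s, z)                         # act on the prompted direction
print("prompt z:", z)
print("action a:", a)
print("directional reward:", directional_reward(s, s_next, z))
```

At test time the prompt `z` can be recomputed at every step as the agent moves, so no fine-tuning is required; this is one way to read the "zero-shot prompting" mechanism the abstract describes.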
Cite
Text
Park et al. "Foundation Policies with Hilbert Representations." International Conference on Machine Learning, 2024.
Markdown
[Park et al. "Foundation Policies with Hilbert Representations." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/park2024icml-foundation/)
BibTeX
@inproceedings{park2024icml-foundation,
  title     = {{Foundation Policies with Hilbert Representations}},
  author    = {Park, Seohong and Kreiman, Tobias and Levine, Sergey},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {39737--39761},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/park2024icml-foundation/}
}