REGENT: A Retrieval-Augmented Generalist Agent That Can Act In-Context in New Environments
Abstract
Building generalist agents that can rapidly adapt to new environments is a key challenge for deploying AI in the digital and real worlds. Is scaling current agent architectures the most effective way to build generalist agents? We propose a novel approach to pre-train relatively small policies on relatively small datasets and adapt them to unseen environments via in-context learning, without any finetuning. Our key idea is that retrieval offers a powerful bias for fast adaptation. Indeed, we demonstrate that even a simple retrieval-based 1-nearest neighbor agent offers a surprisingly strong baseline for today's state-of-the-art generalist agents. From this starting point, we construct a semi-parametric agent, REGENT, that trains a transformer-based policy on sequences of queries and retrieved neighbors. REGENT generalizes to unseen robotics and game-playing environments via retrieval augmentation and in-context learning; despite using up to 3x fewer parameters and up to an order of magnitude fewer pre-training datapoints, it significantly outperforms today's state-of-the-art generalist agents.
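The retrieval-based 1-nearest-neighbor baseline mentioned above can be sketched minimally: given a few demonstrations from the new environment, the agent replays the action of the stored state closest to the current query state. This is an illustrative sketch, not the paper's implementation; the distance metric (plain Euclidean here) and the flat state representation are assumptions.

```python
import numpy as np

def one_nn_agent(demo_states, demo_actions):
    """Build a 1-NN policy from demonstration data in a new environment.

    demo_states: (N, d) array of demonstration states.
    demo_actions: length-N array of the actions taken in those states.
    """
    def policy(query_state):
        # Euclidean distance from the query to every stored state
        # (the actual distance used by the paper may differ).
        dists = np.linalg.norm(demo_states - query_state, axis=1)
        # Replay the action of the single nearest neighbor.
        return demo_actions[int(np.argmin(dists))]
    return policy

# Tiny illustration with made-up data.
states = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
actions = np.array([0, 1, 2])
policy = one_nn_agent(states, actions)
action = policy(np.array([0.9, 1.1]))  # nearest stored state is [1.0, 1.0]
```

REGENT itself goes beyond this baseline by feeding the query together with its retrieved neighbors into a transformer policy, but the sketch conveys why retrieval alone is already a strong adaptation bias.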
Cite
Text
Sridhar et al. "REGENT: A Retrieval-Augmented Generalist Agent That Can Act In-Context in New Environments." International Conference on Learning Representations, 2025.
Markdown
[Sridhar et al. "REGENT: A Retrieval-Augmented Generalist Agent That Can Act In-Context in New Environments." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/sridhar2025iclr-regent/)
BibTeX
@inproceedings{sridhar2025iclr-regent,
title = {{REGENT: A Retrieval-Augmented Generalist Agent That Can Act In-Context in New Environments}},
author = {Sridhar, Kaustubh and Dutta, Souradeep and Jayaraman, Dinesh and Lee, Insup},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/sridhar2025iclr-regent/}
}