A Generalist Agent

Abstract

Inspired by progress in large-scale language modeling, we apply a similar approach towards building a single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens. In this report we describe the model and the data, and document the current capabilities of Gato.
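The abstract's central mechanism is that every input and output, regardless of modality or embodiment, is serialized into one flat token sequence that a single network with a single set of weights models autoregressively; the paper describes text tokenized with a SentencePiece vocabulary, continuous values (such as joint torques) mu-law companded and binned, and image patches embedded separately. The sketch below illustrates that serialization idea only; the function names, episode layout, and placeholder handling of image patches are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Illustrative sketch of multi-modal serialization into one token stream.
# Vocabulary layout, bin counts, and names here are assumptions for clarity,
# not Gato's exact implementation.

TEXT_VOCAB = 32_000              # assumed text sub-vocabulary size
NUM_BINS = 1_024                 # assumed number of bins for continuous values
CONTINUOUS_OFFSET = TEXT_VOCAB   # continuous tokens placed after text tokens

def mu_law_discretize(x: np.ndarray, mu: float = 100.0, m: float = 256.0) -> np.ndarray:
    """Squash continuous values (e.g. joint torques) and bin them into NUM_BINS tokens."""
    squashed = np.sign(x) * np.log(np.abs(x) * mu + 1.0) / np.log(m * mu + 1.0)
    squashed = np.clip(squashed, -1.0, 1.0)
    bins = ((squashed + 1.0) / 2.0 * (NUM_BINS - 1)).astype(int)
    return bins + CONTINUOUS_OFFSET

def tokenize_timestep(text_tokens, image_patches, proprio, actions):
    """Flatten one timestep of a multi-modal episode into a single token list.

    Image patches are embedded rather than discretized in the paper; here they
    are recorded as placeholder slots at their sequence positions.
    """
    sequence = []
    sequence.extend(int(t) for t in text_tokens)            # text tokens as-is
    sequence.extend(["<patch>"] * len(image_patches))        # image patch slots
    sequence.extend(mu_law_discretize(np.asarray(proprio)).tolist())
    sequence.append("<sep>")                                 # observation/action separator
    sequence.extend(mu_law_discretize(np.asarray(actions)).tolist())
    return sequence

# A robot-control timestep and an image-captioning prompt share one format,
# so the same sequence model can consume (and emit) either.
robot_step = tokenize_timestep(
    text_tokens=[], image_patches=[np.zeros(768)] * 4,
    proprio=[0.1, -0.3, 0.7], actions=[0.05, -0.2],
)
caption_step = tokenize_timestep(
    text_tokens=[17, 942, 7], image_patches=[np.zeros(768)] * 16,
    proprio=[], actions=[],
)
print(len(robot_step), len(caption_step))
```

At decoding time, the same context determines how an emitted token is interpreted: tokens above CONTINUOUS_OFFSET would be un-binned back to joint torques or button presses, while tokens below it would be detokenized as text.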

Cite

Text

Reed et al. "A Generalist Agent." Transactions on Machine Learning Research, 2022.

Markdown

[Reed et al. "A Generalist Agent." Transactions on Machine Learning Research, 2022.](https://mlanthology.org/tmlr/2022/reed2022tmlr-generalist/)

BibTeX

@article{reed2022tmlr-generalist,
  title     = {{A Generalist Agent}},
  author    = {Reed, Scott and Zolna, Konrad and Parisotto, Emilio and Colmenarejo, Sergio Gómez and Novikov, Alexander and Barth-Maron, Gabriel and Giménez, Mai and Sulsky, Yury and Kay, Jackie and Springenberg, Jost Tobias and Eccles, Tom and Bruce, Jake and Razavi, Ali and Edwards, Ashley and Heess, Nicolas and Chen, Yutian and Hadsell, Raia and Vinyals, Oriol and Bordbar, Mahyar and de Freitas, Nando},
  journal   = {Transactions on Machine Learning Research},
  year      = {2022},
  url       = {https://mlanthology.org/tmlr/2022/reed2022tmlr-generalist/}
}