LEGENT: Open Platform for Embodied Agents

Abstract

Despite advancements in Large Multimodal Models (LMMs), their integration into language-grounded, human-like embodied agents remains incomplete, hindering complex real-life task performance in physical environments. Existing integrations often feature limited open-sourcing, challenging collective progress in this field. We introduce LEGENT, an open, scalable platform for developing embodied agents using LMMs. LEGENT offers a dual approach: (1) a rich, interactive 3D environment with communicable and actionable agents, paired with a user-friendly interface; and (2) a sophisticated data generation pipeline that uses advanced algorithms to exploit supervision from simulated worlds at scale. In our experiments, an embryonic vision-language-action model trained on LEGENT-generated data surpasses GPT-4V on embodied tasks, showcasing promising generalization capabilities.
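
To make the interaction model the abstract describes more concrete, here is a minimal, self-contained sketch of an observation/action loop for an agent that both acts (movement) and communicates (language), with a placeholder policy where a vision-language-action model would sit. All names here (Observation, Action, ToyEnvironment, policy) are illustrative assumptions for this page, not LEGENT's actual API.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple
import random

@dataclass
class Observation:
    """Ego-centric view plus any utterance heard from the user."""
    rgb: List[List[List[int]]]          # H x W x 3 image as nested lists
    chat: Optional[str] = None          # most recent user instruction, if any

@dataclass
class Action:
    """One step of a communicable, actionable agent."""
    move_forward: float = 0.0           # metres to move this step
    rotate: float = 0.0                 # degrees to turn
    say: Optional[str] = None           # natural-language reply to the user

class ToyEnvironment:
    """Stand-in for an interactive 3D environment: returns a blank frame
    and a fixed instruction, and terminates after a few steps."""
    def __init__(self, max_steps: int = 5):
        self.max_steps = max_steps
        self.t = 0

    def reset(self) -> Observation:
        self.t = 0
        frame = [[[0, 0, 0] for _ in range(4)] for _ in range(4)]
        return Observation(rgb=frame, chat="Please come to the kitchen.")

    def step(self, action: Action) -> Tuple[Observation, bool]:
        self.t += 1
        frame = [[[0, 0, 0] for _ in range(4)] for _ in range(4)]
        done = self.t >= self.max_steps
        return Observation(rgb=frame, chat=None), done

def policy(obs: Observation) -> Action:
    """Placeholder for a vision-language-action model: maps the observation
    (image + chat) to a low-level action and an optional reply."""
    if obs.chat:
        return Action(move_forward=1.0, say="On my way.")
    return Action(move_forward=1.0, rotate=random.choice([0.0, 15.0, -15.0]))

if __name__ == "__main__":
    env = ToyEnvironment()
    obs = env.reset()
    done = False
    while not done:
        action = policy(obs)
        obs, done = env.step(action)
```

In the platform proper, the same loop would be driven by the simulated 3D scenes and the scalable data generation pipeline rather than this toy stub; the sketch only fixes the shape of the agent-environment interface implied by "communicable and actionable agents".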

Cite

Text

Cheng et al. "LEGENT: Open Platform for Embodied Agents." ICML 2024 Workshops: MFM-EAI, 2024.

Markdown

[Cheng et al. "LEGENT: Open Platform for Embodied Agents." ICML 2024 Workshops: MFM-EAI, 2024.](https://mlanthology.org/icmlw/2024/cheng2024icmlw-legent/)

BibTeX

@inproceedings{cheng2024icmlw-legent,
  title     = {{LEGENT: Open Platform for Embodied Agents}},
  author    = {Cheng, Zhili and Hu, Jinyi and Wang, Zhitong and Tu, Yuge and Hu, Shengding and Liu, An and Li, Pengkai and Shi, Lei and Liu, Zhiyuan and Sun, Maosong},
  booktitle = {ICML 2024 Workshops: MFM-EAI},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/cheng2024icmlw-legent/}
}