The Embodied World Model Based on LLM with Visual Information and Prediction-Oriented Prompts

Abstract

In recent years, as machine learning, particularly for vision and language understanding, has advanced, research in embodied AI has also evolved. VOYAGER is a well-known LLM-based embodied AI agent that enables autonomous exploration in the Minecraft world, but it has issues such as the underutilization of visual data and insufficient functionality as a world model. In this research, we investigated whether visual data can be utilized and whether an LLM can function as a world model to improve the performance of embodied AI. The experimental results revealed that the LLM can extract the necessary information from visual data, and that utilizing this information improves its performance. The results also suggested that appropriately designed prompts can elicit the LLM’s function as a world model.

Cite

Text

Haijima et al. "The Embodied World Model Based on LLM with Visual Information and Prediction-Oriented Prompts." ICML 2024 Workshops: MFM-EAI, 2024.

Markdown

[Haijima et al. "The Embodied World Model Based on LLM with Visual Information and Prediction-Oriented Prompts." ICML 2024 Workshops: MFM-EAI, 2024.](https://mlanthology.org/icmlw/2024/haijima2024icmlw-embodied/)

BibTeX

@inproceedings{haijima2024icmlw-embodied,
  title     = {{The Embodied World Model Based on LLM with Visual Information and Prediction-Oriented Prompts}},
  author    = {Haijima, Wakana and Nakakubo, Kou and Suzuki, Masahiro and Matsuo, Yutaka},
  booktitle = {ICML 2024 Workshops: MFM-EAI},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/haijima2024icmlw-embodied/}
}