If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents

Abstract

The prominent large language models (LLMs) of today differ from past language models not only in size, but also in that they are trained on a combination of natural language and code. As a medium between humans and computers, code translates high-level goals into executable steps, featuring standard syntax, logical consistency, abstraction, and modularity. In this survey, we present an overview of the various benefits of integrating code into LLMs' training data. We then trace how these code-derived capabilities have led LLMs to emerge as intelligent agents (IAs). Finally, we present several key challenges and future directions for empowering code-LLMs to serve as IAs.

Cite

Text

Yang et al. "If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents." ICLR 2024 Workshops: LLMAgents, 2024.

Markdown

[Yang et al. "If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents." ICLR 2024 Workshops: LLMAgents, 2024.](https://mlanthology.org/iclrw/2024/yang2024iclrw-llm/)

BibTeX

@inproceedings{yang2024iclrw-llm,
  title     = {{If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents}},
  author    = {Yang, Ke and Liu, Jiateng and Wu, John and Yang, Chaoqi and Fung, Yi and Li, Sha and Huang, Zixuan and Cao, Xu and Wang, Xingyao and Ji, Heng and Zhai, ChengXiang},
  booktitle = {ICLR 2024 Workshops: LLMAgents},
  year      = {2024},
  url       = {https://mlanthology.org/iclrw/2024/yang2024iclrw-llm/}
}