Learning Evolving Tools for Large Language Models

Abstract

Tool learning enables large language models (LLMs) to interact with external tools and APIs, greatly expanding their application scope. However, because external environments are dynamic, these tools and APIs may become outdated over time, preventing LLMs from invoking them correctly. Existing research focuses primarily on static environments and overlooks this issue, limiting the adaptability of LLMs in real-world applications. In this paper, we propose ToolEVO, a novel framework designed to enhance the adaptive and reflective capabilities of LLMs against tool variability. Leveraging Monte Carlo Tree Search, ToolEVO enables LLMs to actively explore and interact with dynamic environments, autonomously reflecting on and updating their tool usage based on environmental feedback. Additionally, we introduce ToolQA-D, a benchmark specifically designed to evaluate the impact of tool variability. Extensive experiments demonstrate the effectiveness and stability of our approach, highlighting the importance of adaptability to tool variability for effective tool learning.
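
The abstract's central mechanism, MCTS-driven exploration combined with self-reflection on tool-error feedback, can be sketched in miniature. The sketch below is an assumption-laden illustration, not ToolEVO's actual implementation: the names Node, ToyToolEnv, mcts_tool_search, and reflect are hypothetical, and a real system would use an LLM both to propose tool calls and to perform the reflection step.

import math
import random


class Node:
    """One node in the search tree; each node records a tool-call attempt."""
    def __init__(self, action=None, parent=None):
        self.action = action
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

    def ucb(self, c=1.4):
        # Standard UCT score: balance mean reward against under-exploration.
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits
        )


class ToyToolEnv:
    """Toy stand-in for a dynamic tool environment: the 'search' tool's
    parameter was renamed, so invocations learned earlier now fail."""
    CURRENT_SIGNATURE = "search(query=...)"

    def invoke(self, action):
        # Returns (reward, error_feedback); feedback is None on success.
        if action == self.CURRENT_SIGNATURE:
            return 1.0, None
        return 0.0, "TypeError: unexpected argument; expected 'query'"

    def reflect(self, action, feedback):
        # Stand-in for LLM self-reflection: repair the call from the error
        # message. A real system would prompt the model with this feedback.
        return self.CURRENT_SIGNATURE if "query" in feedback else action


def mcts_tool_search(env, candidate_actions, iterations=50):
    """Explore candidate tool invocations with MCTS; failed calls are
    revised via reflection before their outcome is backpropagated."""
    root = Node()
    for _ in range(iterations):
        # Selection: descend by UCB until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=Node.ucb)
        # Expansion + simulation: attempt one candidate tool call.
        action = random.choice(candidate_actions)
        reward, feedback = env.invoke(action)
        if reward == 0.0 and feedback:
            # Self-reflection on environmental feedback, then retry.
            action = env.reflect(action, feedback)
            reward, _ = env.invoke(action)
        child = Node(action=action, parent=node)
        node.children.append(child)
        # Backpropagation: update statistics along the path to the root.
        while child is not None:
            child.visits += 1
            child.value += reward
            child = child.parent
    return max(root.children, key=lambda n: n.visits).action


env = ToyToolEnv()
print(mcts_tool_search(env, ["search(q=...)", "search(text=...)"]))
# -> "search(query=...)": the repaired, up-to-date invocation

In this toy run, the tool's parameter was renamed from q to query; the stale calls fail, the reflection step repairs them from the error message, and the search converges on the updated invocation. This is the kind of adaptive, self-updating behavior the paper targets, stripped down to its search-and-feedback skeleton.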

Cite

Text

Chen et al. "Learning Evolving Tools for Large Language Models." International Conference on Learning Representations, 2025.

Markdown

[Chen et al. "Learning Evolving Tools for Large Language Models." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/chen2025iclr-learning-b/)

BibTeX

@inproceedings{chen2025iclr-learning-b,
  title     = {{Learning Evolving Tools for Large Language Models}},
  author    = {Chen, Guoxin and Zhang, Zhong and Cong, Xin and Guo, Fangda and Wu, Yesai and Lin, Yankai and Feng, Wenzheng and Wang, Yasheng},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/chen2025iclr-learning-b/}
}