CMAT: A Multi-Agent Collaboration Tuning Framework for Enhancing Small Language Models

Abstract

Open large language models (LLMs) have significantly advanced the field of natural language processing, showcasing impressive performance across various tasks. Despite these advancements, their effective operation still relies heavily on human input to accurately guide the dialogue flow, with agent tuning being a crucial optimization technique in which the model is adjusted, with human oversight, to respond better to such guidance. To address this dependency, our work introduces the TinyAgent model, trained on a meticulously curated high-quality dataset. We also present the Collaborative Multi-Agent Tuning (CMAT) framework, an innovative system designed to augment language agent capabilities through adaptive weight updates based on environmental feedback. This framework fosters collaborative learning and real-time adaptation among multiple intelligent agents, enhancing their context awareness and long-term memory. In this research, we propose a new communication-agent framework that integrates multi-agent systems with environmental feedback mechanisms, offering a scalable method for exploring cooperative behaviors. Notably, our TinyAgent-7B model exhibits performance on par with GPT-3.5 despite having fewer parameters, signifying a substantial improvement in the efficiency and effectiveness of LLMs.

Cite

Text

Xuechen Liang et al. "CMAT: A Multi-Agent Collaboration Tuning Framework for Enhancing Small Language Models." ICLR 2025 Workshops: AgenticAI, 2025.

Markdown

[Xuechen Liang et al. "CMAT: A Multi-Agent Collaboration Tuning Framework for Enhancing Small Language Models." ICLR 2025 Workshops: AgenticAI, 2025.](https://mlanthology.org/iclrw/2025/2025iclrw-cmat/)

BibTeX

@inproceedings{2025iclrw-cmat,
  title     = {{CMAT: A Multi-Agent Collaboration Tuning Framework for Enhancing Small Language Models}},
  author    = {Liang, Xuechen and He, Yangfan and Tao, Meiling and Xia, Yinghui and Wang, Yijin and Wang, Jianhui and Li, Kun and Su, Jiayi and Shi, Tianyu and Wang, Jun and Yang, Jingsong},
  booktitle = {ICLR 2025 Workshops: AgenticAI},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/2025iclrw-cmat/}
}