Agent Skill Acquisition for Large Language Models via CycleQD
Abstract
Training Large Language Models (LLMs) to acquire various skills remains a challenging endeavor. Conventional training approaches often struggle with imbalanced data distributions and objective functions that align poorly with task-specific performance. To address these challenges, we introduce CycleQD, a novel approach that leverages the Quality Diversity (QD) framework through a cyclic adaptation of the MAP-Elites algorithm. In this framework, each task's performance metric is alternated as the quality measure while the others serve as the behavioral characteristics. This cyclic focus on individual tasks allows for concentrated effort on one task at a time, eliminating the need for data ratio tuning and simplifying the design of the objective function. Empirical results indicate that applying CycleQD to 8-billion parameter models not only enables them to surpass traditional fine-tuning methods in coding, operating systems, and database tasks, but also achieves performance on par with GPT-3.5-TURBO across these domains. Our code is available at \url{https://github.com/SakanaAI/CycleQD}.
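The cyclic MAP-Elites loop described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: candidates are plain parameter vectors, `evaluate` and `mutate` are hypothetical stand-ins for real benchmark scoring and the paper's model-merging operators, and the archive is a simple dictionary keyed by discretized behavior characteristics.

```python
import random

TASKS = ["coding", "os", "db"]  # the three skill domains from the paper
BINS = 10  # resolution of each behavior-characteristic axis (assumed)

def evaluate(candidate):
    # Placeholder: in CycleQD this would run the actual task benchmarks.
    return {t: random.random() for t in TASKS}

def mutate(candidate):
    # Placeholder variation operator; the paper uses model merging instead.
    return [x + random.gauss(0, 0.1) for x in candidate]

def bc_cell(scores, quality_task):
    # Discretize the *other* tasks' scores into an archive cell: the task
    # currently serving as quality is excluded from the behavior space.
    return tuple(min(int(scores[t] * BINS), BINS - 1)
                 for t in TASKS if t != quality_task)

def cycle_qd(seed, generations=30):
    archive = {}  # cell -> (candidate, scores)
    scores = evaluate(seed)
    archive[bc_cell(scores, TASKS[0])] = (seed, scores)
    for g in range(generations):
        # Cyclic swap: each generation, a different task is the quality metric.
        quality_task = TASKS[g % len(TASKS)]
        parent, _ = random.choice(list(archive.values()))
        child = mutate(parent)
        s = evaluate(child)
        cell = bc_cell(s, quality_task)
        # Standard MAP-Elites replacement: keep the better elite per cell.
        if cell not in archive or s[quality_task] > archive[cell][1][quality_task]:
            archive[cell] = (child, s)
    return archive

archive = cycle_qd([0.0] * 4)
```

Note the simplification: here the archive cells keep their old binning when the quality metric swaps, whereas a faithful implementation would re-organize elites under the new quality/BC assignment at each cycle.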
Cite
Text
Kuroki et al. "Agent Skill Acquisition for Large Language Models via CycleQD." NeurIPS 2024 Workshops: Continual_FoMo, 2024.
Markdown
[Kuroki et al. "Agent Skill Acquisition for Large Language Models via CycleQD." NeurIPS 2024 Workshops: Continual_FoMo, 2024.](https://mlanthology.org/neuripsw/2024/kuroki2024neuripsw-agent-a/)
BibTeX
@inproceedings{kuroki2024neuripsw-agent-a,
title = {{Agent Skill Acquisition for Large Language Models via CycleQD}},
author = {Kuroki, So and Nakamura, Taishi and Akiba, Takuya and Tang, Yujin},
booktitle = {NeurIPS 2024 Workshops: Continual_FoMo},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/kuroki2024neuripsw-agent-a/}
}