AgentOhana: Designing Unified Data and Training Pipeline for Effective Agent Learning
Abstract
Autonomous agents powered by large language models (LLMs) have garnered significant research attention. However, fully harnessing the potential of LLMs for agent-based tasks presents inherent challenges due to the heterogeneous nature of diverse data sources featuring multi-turn trajectories. In this paper, we introduce AgentOhana as a comprehensive solution to address these challenges. AgentOhana aggregates agent trajectories from distinct environments, spanning a wide array of scenarios. It meticulously standardizes and unifies these trajectories into a consistent format, streamlining the creation of a generic data loader optimized for agent training. Leveraging the data unification, our training pipeline maintains equilibrium across different data sources and preserves independent randomness across devices during dataset partitioning and model training. Additionally, we present xLAM-v0.1, a large action model tailored for AI agents, which demonstrates exceptional performance across various benchmarks. Begin the exploration at https://github.com/SalesforceAIResearch/xLAM.
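The unification and balanced-loading ideas in the abstract can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the AgentOhana implementation: the `UnifiedTurn`/`UnifiedTrajectory` schema, the `balanced_sampler` helper, and the toy "webshop"/"toolbench" source names are hypothetical stand-ins; the actual format and loader are in the linked repository.

```python
import random
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class UnifiedTurn:
    """One turn of a multi-turn trajectory in a shared, source-agnostic format (hypothetical schema)."""
    role: str      # e.g. "human", "gpt", or "function"
    content: str


@dataclass
class UnifiedTrajectory:
    source: str                                        # originating environment / dataset name
    turns: List[UnifiedTurn] = field(default_factory=list)


def balanced_sampler(datasets: Dict[str, List[UnifiedTrajectory]],
                     device_rank: int,
                     base_seed: int = 0):
    """Yield trajectories so each source contributes evenly, with an independent random stream per device."""
    # Independent randomness across devices: derive a per-rank seed.
    rng = random.Random(base_seed + device_rank)
    # Equilibrium across sources: round-robin over independently shuffled per-source queues.
    queues = {name: rng.sample(trajs, len(trajs)) for name, trajs in datasets.items()}
    while any(queues.values()):
        for name in list(queues):
            if queues[name]:
                yield queues[name].pop()


if __name__ == "__main__":
    # Toy data from two hypothetical environments, already converted to the unified format.
    data = {
        "webshop": [UnifiedTrajectory("webshop", [UnifiedTurn("human", f"task {i}")]) for i in range(3)],
        "toolbench": [UnifiedTrajectory("toolbench", [UnifiedTurn("human", f"call {i}")]) for i in range(3)],
    }
    for traj in balanced_sampler(data, device_rank=0):
        print(traj.source, traj.turns[0].content)
```

Because every source is first mapped into the same trajectory schema, a single loader can interleave them without per-dataset special cases, and seeding by device rank keeps each worker's shuffling independent during distributed training.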
Cite
Text
Zhang et al. "AgentOhana: Designing Unified Data and Training Pipeline for Effective Agent Learning." ICLR 2024 Workshops: LLMAgents, 2024.
Markdown
[Zhang et al. "AgentOhana: Designing Unified Data and Training Pipeline for Effective Agent Learning." ICLR 2024 Workshops: LLMAgents, 2024.](https://mlanthology.org/iclrw/2024/zhang2024iclrw-agent/)
BibTeX
@inproceedings{zhang2024iclrw-agent,
title = {{AgentOhana: Designing Unified Data and Training Pipeline for Effective Agent Learning}},
author = {Zhang, Jianguo and Lan, Tian and Rithesh, R N and Liu, Zhiwei and Yao, Weiran and Tan, Juntao and Hoang, Thai Quoc and Yang, Liangwei and Feng, Yihao and Liu, Zuxin and Zhu, Ming and Awalgaonkar, Tulika Manoj and Niebles, Juan Carlos and Savarese, Silvio and Heinecke, Shelby and Wang, Huan and Xiong, Caiming},
booktitle = {ICLR 2024 Workshops: LLMAgents},
year = {2024},
url = {https://mlanthology.org/iclrw/2024/zhang2024iclrw-agent/}
}