SkillAct: Using Skill Abstractions Improves LLM Agents
Abstract
Complex sequential decision-making tasks often require hierarchical thinking and abstraction: breaking these tasks down into simpler subtasks that can be solved with reusable behaviors, or *skills*. In this work, we show that large language models (LLMs) can benefit from skill abstractions when solving interactive tasks. We propose a simple prompting approach named **SkillAct**, which can extend existing prompting approaches. In addition, we demonstrate that these skill abstractions can be *learned* from few-shot demonstrations by prompting LLMs. **SkillAct** improves the performance of existing approaches such as ReAct on the interactive task benchmark ALFWorld.
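To make the idea concrete, the following is a minimal, hypothetical sketch of prompting with skill abstractions: reusable skills are listed (with short descriptions) ahead of the task, so the LLM can plan in terms of skills rather than primitive actions. The skill names, descriptions, and prompt layout here are illustrative assumptions, not the authors' actual SkillAct prompt.

```python
# Hypothetical sketch of a skill-augmented prompt for an ALFWorld-style task.
# Skill names and wording are invented for illustration.
SKILLS = {
    "find(object)": "Search likely receptacles until the object is visible.",
    "pick_and_place(object, receptacle)": "Take the object, go to the receptacle, and put it down.",
    "clean(object)": "Take the object to the sink and rinse it.",
}

def build_skill_prompt(task: str, skills: dict[str, str] = SKILLS) -> str:
    """Compose a ReAct-style prompt that exposes reusable skills
    before the task description."""
    lines = ["You can use the following skills:"]
    for name, desc in skills.items():
        lines.append(f"- {name}: {desc}")
    lines.append(f"Task: {task}")
    lines.append("Think step by step, then act using the skills above.")
    return "\n".join(lines)

print(build_skill_prompt("put a clean mug on the desk"))
```

In this sketch, learning skills from few-shot demonstrations would amount to prompting the LLM to summarize recurring action subsequences in the demonstrations into new entries of the skill dictionary.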
Cite
Text
Liu et al. "SkillAct: Using Skill Abstractions Improves LLM Agents." ICML 2024 Workshops: LLMs_and_Cognition, 2024.
Markdown
[Liu et al. "SkillAct: Using Skill Abstractions Improves LLM Agents." ICML 2024 Workshops: LLMs_and_Cognition, 2024.](https://mlanthology.org/icmlw/2024/liu2024icmlw-skillact/)
BibTeX
@inproceedings{liu2024icmlw-skillact,
  title = {{SkillAct: Using Skill Abstractions Improves LLM Agents}},
  author = {Liu, Anthony Zhe and Choi, Jongwook and Sohn, Sungryull and Fu, Yao and Kim, Jaekyeom and Kim, Dong-Ki and Wang, Xinhe and Yoo, Jaewon and Lee, Honglak},
  booktitle = {ICML 2024 Workshops: LLMs_and_Cognition},
  year = {2024},
  url = {https://mlanthology.org/icmlw/2024/liu2024icmlw-skillact/}
}