Bootstrapping Cognitive Agents with a Large Language Model
Abstract
Large language models contain noisy general knowledge of the world, yet are hard to train or fine-tune. In contrast, cognitive architectures have excellent interpretability and are flexible to update, but require a lot of manual work to instantiate. In this work, we combine the best of both worlds: bootstrapping a cognitive-based model with the noisy knowledge encoded in large language models. Through an embodied agent doing kitchen tasks, we show that our proposed framework yields better efficiency compared to an agent entirely based on large language models. Our experiments also indicate that the cognitive agent bootstrapped using this framework can generalize to novel environments and be scaled to complex tasks.
Cite
Text
Zhu and Simmons. "Bootstrapping Cognitive Agents with a Large Language Model." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I1.27822
Markdown
[Zhu and Simmons. "Bootstrapping Cognitive Agents with a Large Language Model." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/zhu2024aaai-bootstrapping/) doi:10.1609/AAAI.V38I1.27822
BibTeX
@inproceedings{zhu2024aaai-bootstrapping,
title = {{Bootstrapping Cognitive Agents with a Large Language Model}},
author = {Zhu, Feiyu and Simmons, Reid G.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {655--663},
doi = {10.1609/AAAI.V38I1.27822},
url = {https://mlanthology.org/aaai/2024/zhu2024aaai-bootstrapping/}
}