BAKU: An Efficient Transformer for Multi-Task Policy Learning
Abstract
Training generalist agents capable of solving diverse tasks is challenging, often requiring large datasets of expert demonstrations. This is particularly problematic in robotics, where each data point requires physical execution of actions in the real world. Thus, there is a pressing need for architectures that can effectively leverage the available training data. In this work, we present BAKU, a simple transformer architecture that enables efficient learning of multi-task robot policies. BAKU builds upon recent advancements in offline imitation learning and meticulously combines observation trunks, action chunking, multi-sensory observations, and action heads to substantially improve upon prior work. Our experiments on 129 simulated tasks across the LIBERO, Meta-World, and DeepMind Control suites exhibit an overall 18% absolute improvement over RT-1 and MT-ACT, with a 36% improvement on the harder LIBERO benchmark. On 30 real-world manipulation tasks, given an average of just 17 demonstrations per task, BAKU achieves a 91% success rate. Videos of the robot are best viewed at baku-robot.github.io.
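The abstract names the key ingredients BAKU combines: per-modality observation encoders, a shared trunk, a task conditioning signal, and an action head that predicts a chunk of future actions. The toy sketch below illustrates only that data flow; it is not the paper's implementation. All dimensions, weight matrices, and function names are hypothetical, and the trunk is a stand-in (a single attention-weighted pooling step) rather than a real transformer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- not taken from the paper.
D = 8         # shared token dimension
CHUNK = 4     # future actions predicted per step (action chunking)
ACT_DIM = 2   # action dimensionality

def encode(obs, W):
    """Project one sensory stream (camera, proprioception, ...) to a token."""
    return np.tanh(obs @ W)

# One encoder per modality, plus a task token (all randomly initialized here).
W_cam = rng.normal(size=(16, D))
W_prop = rng.normal(size=(4, D))
task_token = rng.normal(size=(D,))

def trunk(tokens):
    """Stand-in for the observation trunk: attention-weighted pooling of the
    modality and task tokens (a real transformer would stack such layers)."""
    scores = tokens @ tokens[-1]            # attend relative to the task token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ tokens

# Action head maps trunk features to a flat chunk of future actions.
W_head = rng.normal(size=(D, CHUNK * ACT_DIM))

def policy(camera, proprio):
    tokens = np.stack([encode(camera, W_cam), encode(proprio, W_prop), task_token])
    features = trunk(tokens)
    return (features @ W_head).reshape(CHUNK, ACT_DIM)

actions = policy(rng.normal(size=16), rng.normal(size=4))
print(actions.shape)  # (4, 2): a chunk of 4 future 2-D actions
```

Even in this toy form, the structure shows why the recipe is data-efficient to extend: adding a sensor only adds an encoder, and changing the task only swaps the task token, while the trunk and action head are shared across tasks.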
Cite
Text
Haldar et al. "BAKU: An Efficient Transformer for Multi-Task Policy Learning." Neural Information Processing Systems, 2024. doi:10.52202/079017-4484
Markdown
[Haldar et al. "BAKU: An Efficient Transformer for Multi-Task Policy Learning." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/haldar2024neurips-baku/) doi:10.52202/079017-4484
BibTeX
@inproceedings{haldar2024neurips-baku,
title = {{BAKU: An Efficient Transformer for Multi-Task Policy Learning}},
author = {Haldar, Siddhant and Peng, Zhuoran and Pinto, Lerrel},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-4484},
url = {https://mlanthology.org/neurips/2024/haldar2024neurips-baku/}
}