Efficient Domain Adaptation of Robotic Foundation Models via Hypernetwork-Generated LoRA
Abstract
This paper investigates how to efficiently adapt a pre-trained robotic foundation model to a new domain containing many different tasks. We introduce Hyper-LoRA, a method built upon LoRA and Hypernetworks (HNs), which makes this domain adaptation both parameter-efficient, via low-rank adaptation, and data-efficient, by sharing knowledge across tasks in the target domain through the HN. By training Hyper-LoRA on a moderate number of multi-task demonstrations from the target domain, we achieve not only significantly better performance on the training tasks, but also promising zero-shot generalization to unseen tasks.
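The core idea described in the abstract, a hypernetwork that generates task-conditioned low-rank (LoRA) weight updates for a frozen base model, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the layer sizes, the two-layer hypernetwork, and the random task embedding are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not taken from the paper)
in_dim, out_dim, rank, task_dim, hidden = 32, 8, 4, 16, 64

# Frozen base weight of one linear layer in the foundation model
W_base = rng.standard_normal((out_dim, in_dim)) * 0.02

# Hypernetwork weights: task embedding -> flattened LoRA factors A, B.
# Only these weights would be trained; they are shared across all tasks.
H1 = rng.standard_normal((hidden, task_dim)) * 0.1
H2 = rng.standard_normal((rank * (in_dim + out_dim), hidden)) * 0.1

def hyper_lora_forward(x, task_emb):
    """Generate LoRA factors A, B from a task embedding, then apply
    the frozen base layer plus the low-rank update B @ A."""
    h = np.maximum(H1 @ task_emb, 0.0)          # hypernetwork hidden layer (ReLU)
    params = H2 @ h                              # flattened LoRA parameters
    A = params[: rank * in_dim].reshape(rank, in_dim)
    B = params[rank * in_dim:].reshape(out_dim, rank)
    delta_W = B @ A                              # rank-limited weight update
    return x @ (W_base + delta_W).T

x = rng.standard_normal((5, in_dim))             # a batch of 5 inputs
task_emb = rng.standard_normal(task_dim)         # embedding of one target task
y = hyper_lora_forward(x, task_emb)
print(y.shape)                                   # (5, out_dim)
```

Because the hypernetwork is shared across tasks, a new task only requires a new task embedding, which is what enables the knowledge sharing and zero-shot behavior the abstract describes.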
Cite

Text:
Xiong et al. "Efficient Domain Adaptation of Robotic Foundation Models via Hypernetwork-Generated LoRA." NeurIPS 2024 Workshops: AFM, 2024.

Markdown:
[Xiong et al. "Efficient Domain Adaptation of Robotic Foundation Models via Hypernetwork-Generated LoRA." NeurIPS 2024 Workshops: AFM, 2024.](https://mlanthology.org/neuripsw/2024/xiong2024neuripsw-efficient/)

BibTeX:
@inproceedings{xiong2024neuripsw-efficient,
  title = {{Efficient Domain Adaptation of Robotic Foundation Models via Hypernetwork-Generated LoRA}},
  author = {Xiong, Zheng and Sharma, Siddhant and Li, Kang and Vuorio, Risto and Whiteson, Shimon},
  booktitle = {NeurIPS 2024 Workshops: AFM},
  year = {2024},
  url = {https://mlanthology.org/neuripsw/2024/xiong2024neuripsw-efficient/}
}