Active Fine-Tuning of Multi-Task Policies
Abstract
Pre-trained generalist policies are rapidly gaining relevance in robot learning due to their promise of fast adaptation to novel, in-domain tasks. This adaptation often relies on collecting new demonstrations for a specific task of interest and applying imitation learning algorithms, such as behavioral cloning. However, as soon as several tasks need to be learned, we must decide which tasks should be demonstrated, and how often. We study this multi-task problem and explore an interactive framework in which the agent adaptively selects the tasks to be demonstrated. We propose AMF (Active Multi-task Fine-tuning), an algorithm to maximize multi-task policy performance under a limited demonstration budget by collecting demonstrations yielding the largest information gain on the expert policy. We derive performance guarantees for AMF under regularity assumptions and demonstrate its empirical effectiveness in efficiently fine-tuning neural policies in complex and high-dimensional environments.
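The core idea described in the abstract, spending a limited demonstration budget on the tasks whose demonstrations are most informative about the expert policy, can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a simplified greedy scheme under strong assumptions: each task's expert behavior is summarized by a scalar Gaussian estimate with known posterior variance, demonstrations are noisy observations with variance `noise_var`, and the information gain of one more demonstration is the standard Gaussian quantity 0.5 * log(1 + sigma^2 / noise_var). The function name and interface are hypothetical.

```python
import math


def amf_style_selection(task_vars, noise_var=1.0, budget=5):
    """Greedily allocate a demonstration budget across tasks by myopic
    information gain (illustrative sketch, not the AMF algorithm itself).

    task_vars: dict mapping task name -> posterior variance of the scalar
               estimate of that task's expert behavior.
    noise_var: observation-noise variance of a single demonstration.
    budget:    total number of demonstrations to collect.
    """
    history = []
    vars_ = dict(task_vars)
    for _ in range(budget):
        # Expected information gain of one noisy demo on a Gaussian estimate:
        # 0.5 * log(1 + sigma^2 / noise_var). Larger uncertainty -> larger gain.
        gains = {t: 0.5 * math.log(1.0 + v / noise_var) for t, v in vars_.items()}
        chosen = max(gains, key=gains.get)
        history.append(chosen)
        # Standard Gaussian posterior update: precisions add.
        vars_[chosen] = 1.0 / (1.0 / vars_[chosen] + 1.0 / noise_var)
    return history, vars_
```

Note how the selection naturally alternates: once a high-uncertainty task has received a demonstration, its posterior variance shrinks and other tasks may become the most informative choice at the next step.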
Cite
Text
Bagatella et al. "Active Fine-Tuning of Multi-Task Policies." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Bagatella et al. "Active Fine-Tuning of Multi-Task Policies." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/bagatella2025icml-active/)
BibTeX
@inproceedings{bagatella2025icml-active,
title = {{Active Fine-Tuning of Multi-Task Policies}},
author = {Bagatella, Marco and Hübotter, Jonas and Martius, Georg and Krause, Andreas},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {2409--2441},
volume = {267},
url = {https://mlanthology.org/icml/2025/bagatella2025icml-active/}
}