Behaviour Distillation
Abstract
Dataset distillation aims to condense large datasets into a small number of synthetic examples that can be used as drop-in replacements when training new models. It has applications to interpretability, neural architecture search, privacy, and continual learning. Despite strong successes in supervised domains, such methods have not yet been extended to reinforcement learning, where the lack of a fixed dataset renders most distillation methods unusable. Filling the gap, we formalize $\textit{behaviour distillation}$, a setting that aims to discover and then condense the information required for training an expert policy into a synthetic dataset of state-action pairs, $\textit{without access to expert data}$. We then introduce Hallucinating Datasets with Evolution Strategies (HaDES), a method for behaviour distillation that can discover datasets of $\textit{just four}$ state-action pairs which, under supervised learning, train agents to competitive performance levels in continuous control tasks. We show that these datasets generalize out of distribution to training policies with a wide range of architectures and hyperparameters. We also demonstrate application to a downstream task, namely training multi-task agents in a zero-shot fashion. Beyond behaviour distillation, HaDES provides significant improvements in neuroevolution for RL over previous approaches and achieves SoTA results on one standard supervised dataset distillation task. Finally, we show that visualizing the synthetic datasets can provide human-interpretable task insights.
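To make the setup concrete, here is a minimal toy sketch of the behaviour-distillation loop the abstract describes: an outer evolution strategy optimizes a synthetic dataset of just four (state, action) pairs, whose fitness is the return of a policy trained on it by supervised learning. This is not the paper's HaDES implementation; the linear "environment", policy class, and all hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an RL task: the (unknown) expert behaviour is the linear
# mapping a = 2*s. A distilled dataset succeeds if supervised learning on it
# recovers a policy close to that expert. Everything here is illustrative.
TARGET_W = 2.0
EVAL_STATES = rng.uniform(-1.0, 1.0, size=128)  # fixed evaluation states

def evaluate(w):
    """'Episode return' of the linear policy a = w*s (higher is better)."""
    return -np.mean((w * EVAL_STATES - TARGET_W * EVAL_STATES) ** 2)

def fit_policy(dataset):
    """Inner loop: supervised (least-squares) fit of a = w*s on the
    synthetic (state, action) pairs."""
    s, a = dataset[:, 0], dataset[:, 1]
    return float(s @ a / (s @ s + 1e-8))

def fitness(dataset):
    """Train on the candidate dataset, then score the trained policy."""
    return evaluate(fit_policy(dataset))

# Outer loop: a basic OpenAI-style evolution strategy over the dataset itself.
n_pairs = 4                            # four (state, action) pairs
theta = rng.normal(size=(n_pairs, 2))  # the synthetic dataset being evolved
sigma, lr, pop = 0.1, 0.05, 64

for _ in range(400):
    eps = rng.normal(size=(pop, n_pairs, 2))
    scores = np.array([fitness(theta + sigma * e) for e in eps])
    weights = (scores - scores.mean()) / (scores.std() + 1e-8)
    theta += lr / (pop * sigma) * np.einsum("p,pij->ij", weights, eps)

w_star = fit_policy(theta)  # policy trained only on the evolved dataset
```

Note that the expert mapping is never queried directly: the evolution strategy only sees episode returns, so the dataset must "hallucinate" the behaviour from reward signal alone, mirroring the no-expert-data constraint in the abstract.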
Cite
Text
Lupu et al. "Behaviour Distillation." International Conference on Learning Representations, 2024.

Markdown

[Lupu et al. "Behaviour Distillation." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/lupu2024iclr-behaviour/)

BibTeX
@inproceedings{lupu2024iclr-behaviour,
title = {{Behaviour Distillation}},
author = {Lupu, Andrei and Lu, Chris and Liesen, Jarek Luca and Lange, Robert Tjarko and Foerster, Jakob Nicolaus},
booktitle = {International Conference on Learning Representations},
year = {2024},
url = {https://mlanthology.org/iclr/2024/lupu2024iclr-behaviour/}
}