Premier-TACO Is a Few-Shot Policy Learner: Pretraining Multitask Representation via Temporal Action-Driven Contrastive Loss

Abstract

We present Premier-TACO, a multitask feature representation learning approach designed to improve few-shot policy learning efficiency in sequential decision-making tasks. Premier-TACO leverages a subset of multitask offline datasets for pretraining a general feature representation, which captures critical environmental dynamics and is fine-tuned using minimal expert demonstrations. It advances the temporal action contrastive learning (TACO) objective, known for state-of-the-art results in visual control tasks, by incorporating a novel negative example sampling strategy. This strategy is crucial in significantly boosting TACO’s computational efficiency, making large-scale multitask offline pretraining feasible. Our extensive empirical evaluation on a diverse set of continuous control benchmarks, including DeepMind Control Suite, MetaWorld, and LIBERO, demonstrates Premier-TACO’s effectiveness in pretraining visual representations, significantly enhancing few-shot imitation learning of novel tasks.
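To illustrate the family of objectives the abstract refers to, here is a minimal sketch of a generic InfoNCE-style temporal contrastive loss with within-batch negatives. This is not the paper's exact Premier-TACO objective; the function name, the simple within-batch negative scheme, and the temperature value are illustrative assumptions.

```python
import numpy as np

def temporal_contrastive_loss(state_action_emb, next_state_emb, temperature=0.1):
    """Generic InfoNCE-style temporal contrastive loss (illustrative sketch).

    state_action_emb: (B, D) embeddings of (s_t, a_t) pairs
    next_state_emb:   (B, D) embeddings of the corresponding s_{t+1}
    Matched rows are positives; every other row in the batch serves as a
    negative (a simple within-batch scheme, not the paper's exact strategy).
    """
    # L2-normalize so dot products are cosine similarities
    q = state_action_emb / np.linalg.norm(state_action_emb, axis=1, keepdims=True)
    k = next_state_emb / np.linalg.norm(next_state_emb, axis=1, keepdims=True)
    logits = q @ k.T / temperature                 # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    # Cross-entropy with the diagonal (matched pairs) as the positive class
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy check: correctly aligned pairs should score a lower loss than misaligned ones
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = temporal_contrastive_loss(z, z + 0.01 * rng.normal(size=(8, 16)))
misaligned = temporal_contrastive_loss(z, np.roll(z, 1, axis=0))
```

The loss is minimized when each (state, action) embedding is most similar to its own next-state embedding, which is the intuition behind pretraining representations that capture environment dynamics.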

Cite

Text

Zheng et al. "Premier-TACO Is a Few-Shot Policy Learner: Pretraining Multitask Representation via Temporal Action-Driven Contrastive Loss." International Conference on Machine Learning, 2024.

Markdown

[Zheng et al. "Premier-TACO Is a Few-Shot Policy Learner: Pretraining Multitask Representation via Temporal Action-Driven Contrastive Loss." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/zheng2024icml-premiertaco/)

BibTeX

@inproceedings{zheng2024icml-premiertaco,
  title     = {{Premier-TACO Is a Few-Shot Policy Learner: Pretraining Multitask Representation via Temporal Action-Driven Contrastive Loss}},
  author    = {Zheng, Ruijie and Liang, Yongyuan and Wang, Xiyao and Ma, Shuang and Daumé III, Hal and Xu, Huazhe and Langford, John and Palanisamy, Praveen and Basu, Kalyan Shankar and Huang, Furong},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {61413--61431},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/zheng2024icml-premiertaco/}
}