Pretrained Encoders Are All You Need
Abstract
Data efficiency and generalization are key challenges in deep learning and deep reinforcement learning, as many models are trained on large-scale, domain-specific, and expensive-to-label datasets. Self-supervised models trained on large-scale uncurated datasets have shown successful transfer to diverse settings. We investigate using pretrained image representations and spatio-temporal attention for state representation learning in Atari. We also explore fine-tuning pretrained representations with self-supervised techniques, namely contrastive predictive coding, spatio-temporal contrastive learning, and augmentations. Our results show that pretrained representations are on par with state-of-the-art self-supervised methods trained on domain-specific data. Pretrained representations thus yield data- and compute-efficient state representations.
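The sketch below illustrates the two ideas named in the abstract: extracting Atari state representations from a generically pretrained image encoder, and optionally fine-tuning it with a contrastive (InfoNCE-style) objective over temporally adjacent frames. It is not the authors' implementation; the choice of an ImageNet-pretrained ResNet-18 from torchvision, the input resolution, and the frame-pairing scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

# Hypothetical stand-in for a pretrained encoder: ImageNet ResNet-18 with the
# classification head removed, so it outputs 512-d features per frame.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder = nn.Sequential(*list(backbone.children())[:-1])

def encode(frames: torch.Tensor) -> torch.Tensor:
    """Map frames (B, 3, 224, 224) to state representations (B, 512)."""
    return encoder(frames).flatten(start_dim=1)

def info_nce(anchor: torch.Tensor, positive: torch.Tensor,
             temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style contrastive loss: each anchor should match its own positive
    (here, the temporally adjacent frame) against the other samples in the batch."""
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    logits = a @ p.t() / temperature       # (B, B) similarity matrix
    targets = torch.arange(a.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Usage: keep the encoder frozen for "pretrained only" representations, or leave
# its parameters trainable (as here) to fine-tune with the contrastive objective.
frames_t  = torch.rand(8, 3, 224, 224)     # dummy frames at time t
frames_t1 = torch.rand(8, 3, 224, 224)     # dummy frames at time t+1
loss = info_nce(encode(frames_t), encode(frames_t1))
loss.backward()
```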
Cite
Text
Khan et al. "Pretrained Encoders Are All You Need." ICML 2021 Workshops: URL, 2021.Markdown
[Khan et al. "Pretrained Encoders Are All You Need." ICML 2021 Workshops: URL, 2021.](https://mlanthology.org/icmlw/2021/khan2021icmlw-pretrained/)BibTeX
@inproceedings{khan2021icmlw-pretrained,
title = {{Pretrained Encoders Are All You Need}},
author = {Khan, Mina and Rane, Advait Prashant and P, Srivatsa and Chenniappa, Shriram and Anand, Rishabh and Ozair, Sherjil and Maes, Patricia},
booktitle = {ICML 2021 Workshops: URL},
year = {2021},
url = {https://mlanthology.org/icmlw/2021/khan2021icmlw-pretrained/}
}