DRESS: Disentangled Representation-Based Self-Supervised Meta-Learning for Diverse Tasks
Abstract
Meta-learning is a strong class of approaches for solving few-shot learning tasks. Nonetheless, recent research suggests that simply pre-training a generic encoder can surpass meta-learning algorithms. In this paper, we first discuss why meta-learning fails to stand out in these few-shot learning experiments, and hypothesize that the cause is a lack of diversity in the few-shot learning tasks. We then propose DRESS, a task-agnostic Disentangled REpresentation-based Self-Supervised meta-learning approach that enables fast model adaptation on highly diversified few-shot learning tasks. Specifically, DRESS uses disentangled representation learning to construct self-supervised tasks that fuel the meta-training process. We validate the effectiveness of DRESS through experiments on few-shot classification tasks over datasets with multiple factors of variation. Through this paper, we advocate for a re-examination of proper setups for task-adaptation studies, and aim to reignite interest in the potential of meta-learning for solving few-shot learning tasks via disentangled representations.
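As a rough illustration of the mechanism the abstract describes, the sketch below builds a self-supervised few-shot episode by pseudo-labeling examples according to bins of a single disentangled latent dimension; each latent dimension then yields a different task family, giving the diverse task distribution the paper argues meta-learning needs. Everything here (the quantile binning, the episode sizes, and names such as `make_episode`) is an assumed toy construction for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_episode(latents, factor_idx, n_way=2, k_shot=5, n_query=5):
    """Build one self-supervised few-shot episode by pseudo-labeling
    examples according to quantile bins of a single latent factor.

    latents:     (N, D) array of disentangled representations
    factor_idx:  which latent dimension defines this task
    """
    z = latents[:, factor_idx]
    # Pseudo-classes: n_way quantile bins along the chosen factor.
    edges = np.quantile(z, np.linspace(0.0, 1.0, n_way + 1))
    labels = np.clip(np.searchsorted(edges, z, side="right") - 1, 0, n_way - 1)

    support_idx, query_idx = [], []
    for c in range(n_way):
        # Assumes each bin holds at least k_shot + n_query examples.
        members = np.flatnonzero(labels == c)
        picked = rng.choice(members, size=k_shot + n_query, replace=False)
        support_idx.extend(picked[:k_shot])
        query_idx.extend(picked[k_shot:])
    return np.array(support_idx), np.array(query_idx), labels

# Stand-in for encoder outputs; a real pipeline would use a
# disentangled encoder applied to unlabeled data.
latents = rng.normal(size=(1000, 16))
support, query, pseudo = make_episode(latents, factor_idx=3)
```

Episodes sampled this way can be fed to any standard episodic meta-learner; the point of the sketch is only that pseudo-labels derived from distinct latent factors define distinct tasks without any human annotation.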
Cite

Text:
Cui et al. "DRESS: Disentangled Representation-Based Self-Supervised Meta-Learning for Diverse Tasks." NeurIPS 2024 Workshops: SSL, 2024.

Markdown:
[Cui et al. "DRESS: Disentangled Representation-Based Self-Supervised Meta-Learning for Diverse Tasks." NeurIPS 2024 Workshops: SSL, 2024.](https://mlanthology.org/neuripsw/2024/cui2024neuripsw-dress/)

BibTeX:
@inproceedings{cui2024neuripsw-dress,
  title     = {{DRESS: Disentangled Representation-Based Self-Supervised Meta-Learning for Diverse Tasks}},
  author    = {Cui, Wei and Sui, Yi and Cresswell, Jesse C. and Golestan, Keyvan},
  booktitle = {NeurIPS 2024 Workshops: SSL},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/cui2024neuripsw-dress/}
}