Neural Fine-Tuning Search for Few-Shot Learning
Abstract
In few-shot recognition, a classifier that has been trained on one set of classes is required to rapidly adapt and generalize to a disjoint, novel set of classes. To that end, recent studies have shown the efficacy of fine-tuning with carefully crafted adaptation architectures. However, this raises the question: how should one design the optimal adaptation strategy? In this paper, we study this question through the lens of neural architecture search (NAS). Given a pre-trained neural network, our algorithm discovers the optimal arrangement of adapters, which layers to keep frozen, and which to fine-tune. We demonstrate the generality of our NAS method by applying it to both residual networks and vision transformers and report state-of-the-art performance on Meta-Dataset and Meta-Album.
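To make the search space described above concrete, the sketch below is a minimal, hypothetical illustration, not the authors' implementation: each block of a pre-trained backbone is assigned one of three choices (freeze, fine-tune, or freeze plus a trainable adapter), and random sampling stands in for the actual NAS controller. The names Adapter and apply_strategy, the bottleneck width, and the toy linear backbone are all assumptions made for illustration.

import random
import torch
import torch.nn as nn

CHOICES = ("freeze", "finetune", "adapter")

class Adapter(nn.Module):
    # Lightweight residual bottleneck adapter (illustrative design, not the paper's exact module).
    def __init__(self, dim, hidden=16):
        super().__init__()
        self.down = nn.Linear(dim, hidden)
        self.up = nn.Linear(hidden, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

def apply_strategy(blocks, strategy):
    # Realize one candidate adaptation strategy: per-block decision to
    # freeze, fine-tune, or keep frozen and attach an adapter after it.
    layers = []
    for block, choice in zip(blocks, strategy):
        block.requires_grad_(choice == "finetune")
        layers.append(block)
        if choice == "adapter":  # block stays frozen; the adapter is the trainable part
            layers.append(Adapter(block.out_features))
    return nn.Sequential(*layers)

# Toy "pre-trained" backbone: three linear blocks of width 32.
backbone = nn.ModuleList(nn.Linear(32, 32) for _ in range(3))

# Random search stands in for the NAS controller: sample a strategy and
# count trainable parameters; a real run would score each candidate on
# validation episodes of the few-shot task.
strategy = [random.choice(CHOICES) for _ in backbone]
model = apply_strategy(backbone, strategy)
print(strategy, sum(p.numel() for p in model.parameters() if p.requires_grad))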
Cite
Text
Eustratiadis et al. "Neural Fine-Tuning Search for Few-Shot Learning." International Conference on Learning Representations, 2024.
Markdown
[Eustratiadis et al. "Neural Fine-Tuning Search for Few-Shot Learning." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/eustratiadis2024iclr-neural/)
BibTeX
@inproceedings{eustratiadis2024iclr-neural,
  title     = {{Neural Fine-Tuning Search for Few-Shot Learning}},
  author    = {Eustratiadis, Panagiotis and Dudziak, Łukasz and Li, Da and Hospedales, Timothy},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/eustratiadis2024iclr-neural/}
}