Masked Unsupervised Self-Training for Label-Free Image Classification
Abstract
State-of-the-art computer vision models are mostly trained with supervised learning using human-labeled images, which limits their scalability due to the expensive annotation cost. While self-supervised representation learning has achieved impressive progress, it still requires a second stage of finetuning on labeled data. On the other hand, models pre-trained with large-scale text supervision (e.g., CLIP) have enabled zero-shot transfer to downstream image classification tasks. However, the zero-shot performance of CLIP-like models is often insufficient for real-world adoption. In this paper, we aim to leverage the abundant unlabeled data from a target domain to improve the performance of a pre-trained zero-shot classifier by unsupervised finetuning of the pre-trained model. We propose Masked Unsupervised Self-Training (MUST), a new approach that leverages two different and complementary sources of training signal: pseudo-labels and raw images. MUST jointly optimizes three objectives to learn both class-level global features and pixel-level local features, and enforces a regularization between the two. We demonstrate the efficacy of MUST on 8 downstream tasks across a variety of domains, where it improves upon CLIP by a large margin. MUST also outperforms supervised few-shot adaptation methods. It achieves a top-1 accuracy of 77.7% on ImageNet using ViT-B, +9.4% higher than CLIP and +6.2% higher than 16-shot CLIP adaptation. Our code is available at https://github.com/salesforce/MUST.
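The abstract's description of the three objectives can be made concrete with a short sketch. The PyTorch snippet below is a minimal, illustrative reconstruction based only on the abstract: a self-training loss on confident pseudo-labels, a masked-image-modeling loss on masked patches, and a regularizer aligning pixel-level local (patch) features with the class-level global feature. All names (must_losses, conf_thresh) and the tensor layout are assumptions made for illustration, not the authors' implementation; see the linked repository for the actual code.

import torch
import torch.nn.functional as F

def must_losses(teacher_logits, student_logits,
                pred_patches, target_patches, patch_mask,
                patch_feats, cls_feat, conf_thresh=0.6):
    # (1) Self-training: derive pseudo-labels from the teacher's predictions
    # and keep only confident ones (conf_thresh is an assumed hyperparameter).
    probs = teacher_logits.softmax(dim=-1)
    conf, pseudo = probs.max(dim=-1)
    keep = conf >= conf_thresh
    loss_st = (F.cross_entropy(student_logits[keep], pseudo[keep])
               if keep.any() else student_logits.sum() * 0.0)

    # (2) Masked image modeling: pixel/patch-level loss on masked positions only.
    loss_mim = F.mse_loss(pred_patches[patch_mask], target_patches[patch_mask])

    # (3) Global-local regularization: align mean-pooled local patch features
    # with the class-level global feature.
    loss_align = F.mse_loss(patch_feats.mean(dim=1), cls_feat.detach())

    return loss_st + loss_mim + loss_align

# Toy shapes: batch of 4, 10 classes, 196 patches, 768-dim features.
B, C, P, D = 4, 10, 196, 768
loss = must_losses(
    teacher_logits=torch.randn(B, C),
    student_logits=torch.randn(B, C, requires_grad=True),
    pred_patches=torch.randn(B, P, D, requires_grad=True),
    target_patches=torch.randn(B, P, D),
    patch_mask=torch.rand(B, P) < 0.5,
    patch_feats=torch.randn(B, P, D, requires_grad=True),
    cls_feat=torch.randn(B, D),
)
loss.backward()

In the full method, the teacher would typically be a slowly updated (e.g., EMA) copy of the student and the patch targets would come from the model itself rather than random tensors; both details are elided in this sketch.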
Cite
Text
Li et al. "Masked Unsupervised Self-Training for Label-Free Image Classification." International Conference on Learning Representations, 2023.
Markdown
[Li et al. "Masked Unsupervised Self-Training for Label-Free Image Classification." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/li2023iclr-masked/)
BibTeX
@inproceedings{li2023iclr-masked,
  title     = {{Masked Unsupervised Self-Training for Label-Free Image Classification}},
  author    = {Li, Junnan and Savarese, Silvio and Hoi, Steven},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/li2023iclr-masked/}
}