Contrastive Embedding for Generalized Zero-Shot Learning

Abstract

Generalized zero-shot learning (GZSL) aims to recognize objects from both seen and unseen classes when only labeled examples from the seen classes are provided. Recent feature generation methods learn a generative model that can synthesize the missing visual features of unseen classes to mitigate the data-imbalance problem in GZSL. However, the original visual feature space is suboptimal for GZSL classification since it lacks discriminative information. To tackle this issue, we propose to integrate the generation model with the embedding model, yielding a hybrid GZSL framework. The hybrid GZSL approach maps both the real samples and the synthetic samples produced by the generation model into an embedding space, where we perform the final GZSL classification. Specifically, we propose a contrastive embedding (CE) for our hybrid GZSL framework. The proposed contrastive embedding can leverage not only class-wise supervision but also instance-wise supervision, where the latter is usually neglected by existing GZSL research. We evaluate our proposed hybrid GZSL framework with contrastive embedding, named CE-GZSL, on five benchmark datasets. The results show that our CE-GZSL method outperforms the state of the art by a significant margin on three datasets. Our code is available at https://github.com/Hanzy1996/CE-GZSL.
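The instance-wise supervision mentioned in the abstract is typically realized with an InfoNCE-style contrastive loss over embedded features. The following NumPy sketch illustrates the general idea on L2-normalized embeddings; it is a hypothetical minimal implementation for illustration, not the authors' exact formulation (the function name, batching scheme, and temperature value are assumptions).

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Instance-wise contrastive (InfoNCE) loss sketch.

    Each anchor's positive is the embedding at the same row index;
    every other row in `positives` acts as a negative. This is an
    illustrative stand-in for instance-wise supervision, not the
    paper's exact loss.
    """
    # L2-normalize so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                    # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    exp_logits = np.exp(logits)
    # softmax probability assigned to each anchor's true positive (diagonal)
    probs = np.diag(exp_logits) / exp_logits.sum(axis=1)
    return -np.log(probs).mean()
```

In a hybrid framework of this kind, both real and generator-synthesized features would be passed through the embedding network before such a loss is applied, alongside a class-wise term (e.g. cross-entropy against class semantics).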

Cite

Text

Han et al. "Contrastive Embedding for Generalized Zero-Shot Learning." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00240

Markdown

[Han et al. "Contrastive Embedding for Generalized Zero-Shot Learning." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/han2021cvpr-contrastive/) doi:10.1109/CVPR46437.2021.00240

BibTeX

@inproceedings{han2021cvpr-contrastive,
  title     = {{Contrastive Embedding for Generalized Zero-Shot Learning}},
  author    = {Han, Zongyan and Fu, Zhenyong and Chen, Shuo and Yang, Jian},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2021},
  pages     = {2371--2381},
  doi       = {10.1109/CVPR46437.2021.00240},
  url       = {https://mlanthology.org/cvpr/2021/han2021cvpr-contrastive/}
}