A Joint Generative Model for Zero-Shot Learning
Abstract
Zero-shot learning (ZSL) is a challenging task due to the lack of data from unseen classes during training. Existing methods tend to exhibit a strong bias towards seen classes, also known as the domain shift problem. To mitigate the gap between seen and unseen class data, we propose a joint generative model that synthesizes features as a replacement for unseen data. Based on the generated features, the conventional ZSL problem can be tackled in a supervised way. Specifically, our framework integrates Variational Autoencoders (VAE) and Generative Adversarial Networks (GAN), conditioned on class-level semantic attributes, to generate features via element-wise and holistic reconstruction. A categorization network acts as an additional guide, encouraging the generated features to benefit the subsequent classification task. Moreover, we propose a perceptual reconstruction loss to preserve semantic similarities. Experimental results on five benchmarks show the superiority of our framework over state-of-the-art approaches under both conventional and generalized ZSL settings.
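The core idea in the abstract — synthesize features for unseen classes from their attribute vectors, then solve ZSL as ordinary supervised classification — can be illustrated with a minimal sketch. This is not the paper's VAE-GAN architecture; the attribute vectors, the stand-in generator, and the nearest-centroid classifier below are all hypothetical placeholders for the learned components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical class-level attribute vectors for two unseen classes
# (e.g. striped / spotted / hooved). Real ZSL benchmarks provide these.
attributes = {
    "zebra":   np.array([1.0, 0.0, 1.0]),
    "leopard": np.array([0.0, 1.0, 1.0]),
}

def synthesize_features(attr, n=50, noise_scale=0.1):
    """Stand-in for a trained conditional generator G(z, attr):
    here a fixed linear projection of the attribute plus Gaussian noise."""
    W = np.eye(3) * 2.0  # placeholder for learned generator weights
    return attr @ W + noise_scale * rng.standard_normal((n, 3))

# Build a pseudo-labelled training set for the unseen classes.
X, y = [], []
for label, attr in attributes.items():
    feats = synthesize_features(attr)
    X.append(feats)
    y += [label] * len(feats)
X = np.vstack(X)

# Conventional supervised step: nearest-class-centroid classifier
# trained on the synthesized features.
centroids = {c: X[[i for i, lab in enumerate(y) if lab == c]].mean(0)
             for c in attributes}

def classify(feature):
    return min(centroids, key=lambda c: np.linalg.norm(feature - centroids[c]))

print(classify(np.array([2.0, 0.1, 2.0])))  # near the "zebra" prototype
```

Once features exist for every unseen class, any off-the-shelf classifier can replace the centroid rule; the paper's contribution lies in making the generator produce features good enough for that classifier.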
Cite
Text
Gao et al. "A Joint Generative Model for Zero-Shot Learning." European Conference on Computer Vision Workshops, 2018. doi:10.1007/978-3-030-11018-5_50
Markdown
[Gao et al. "A Joint Generative Model for Zero-Shot Learning." European Conference on Computer Vision Workshops, 2018.](https://mlanthology.org/eccvw/2018/gao2018eccvw-joint/) doi:10.1007/978-3-030-11018-5_50
BibTeX
@inproceedings{gao2018eccvw-joint,
title = {{A Joint Generative Model for Zero-Shot Learning}},
author = {Gao, Rui and Hou, Xingsong and Qin, Jie and Liu, Li and Zhu, Fan and Zhang, Zhao},
booktitle = {European Conference on Computer Vision Workshops},
year = {2018},
pages = {631--646},
doi = {10.1007/978-3-030-11018-5_50},
url = {https://mlanthology.org/eccvw/2018/gao2018eccvw-joint/}
}