Transductive Zero-Shot Learning via Visual Center Adaptation
Abstract
In this paper, we propose a Visual Center Adaptation Method (VCAM) to address the domain shift problem in zero-shot learning. For the seen classes in the training data, VCAM builds an embedding space by learning the mapping from the semantic space to their visual centers. For the unseen classes in the test data, the construction of the embedding space is constrained by a symmetric Chamfer-distance term that adapts the distribution of the synthetic visual centers to that of the real cluster centers. The learned embedding space therefore generalizes well to the unseen classes. Experiments on two widely used datasets demonstrate that our model significantly outperforms state-of-the-art methods.
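The abstract's key ingredient is the symmetric Chamfer-distance term that pulls the synthetic visual centers (mapped from class semantics) toward the real cluster centers of the unlabeled test features, and vice versa. Below is a minimal illustrative sketch of such a symmetric Chamfer distance in Python/NumPy; the function and variable names (`chamfer_distance`, `synthetic_centers`, `real_cluster_centers`) and the exact normalization (mean of squared Euclidean nearest-neighbor distances) are assumptions for illustration, not the paper's definitive formulation.

```python
import numpy as np

def chamfer_distance(A, B):
    """Symmetric Chamfer distance between two point sets.

    A: (n, d) array, e.g. synthetic visual centers mapped from class semantics.
    B: (m, d) array, e.g. cluster centers of unlabeled test-set features.
    Note: the squared-distance / mean normalization here is an assumption.
    """
    # Pairwise squared Euclidean distances, shape (n, m).
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    # Each synthetic center is matched to its nearest real center, and vice versa.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    synthetic_centers = rng.normal(size=(10, 2048))     # one per unseen class (hypothetical)
    real_cluster_centers = rng.normal(size=(10, 2048))  # e.g. from k-means on test features
    print(chamfer_distance(synthetic_centers, real_cluster_centers))
```

In a transductive setting, a term like this would be added to the training loss so that minimizing it aligns the distribution of synthetic centers with that of the observed test-feature clusters.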
Cite
Text
Wan et al. "Transductive Zero-Shot Learning via Visual Center Adaptation." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.330110059
Markdown
[Wan et al. "Transductive Zero-Shot Learning via Visual Center Adaptation." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/wan2019aaai-transductive/) doi:10.1609/AAAI.V33I01.330110059
BibTeX
@inproceedings{wan2019aaai-transductive,
title = {{Transductive Zero-Shot Learning via Visual Center Adaptation}},
author = {Wan, Ziyu and Li, Yan and Yang, Min and Zhang, Junge},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2019},
pages = {10059-10060},
doi = {10.1609/AAAI.V33I01.330110059},
url = {https://mlanthology.org/aaai/2019/wan2019aaai-transductive/}
}