Zero-Shot Logit Adjustment
Abstract
Semantic-descriptor-based Generalized Zero-Shot Learning (GZSL) poses the challenge of recognizing novel classes at test time. The development of generative models enables current GZSL techniques to probe the semantic-visual link more deeply, culminating in a two-stage form comprising a generator and a classifier. However, existing generation-based methods focus on enhancing the generator while neglecting improvement of the classifier. In this paper, we first analyze two properties of the generated pseudo unseen samples: bias and homogeneity. We then perform variational Bayesian inference to back-derive the evaluation metrics, which reflect the balance between the seen and unseen classes. As a consequence of our derivation, these two properties are incorporated into classifier training as seen-unseen priors via logit adjustment. The resulting Zero-Shot Logit Adjustment further makes semantic-based classifiers effective in generation-based GZSL. Our experiments demonstrate that the proposed technique achieves state-of-the-art performance when combined with a basic generator, and that it can improve various generative Zero-Shot Learning frameworks. Our code is available at https://github.com/cdb342/IJCAI-2022-ZLA.
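The abstract does not give the exact loss, but the general logit-adjustment idea it builds on can be sketched as follows: log class priors (here standing in for the seen-unseen priors the paper derives) are added to the classifier's logits before the softmax cross-entropy, so that classes with abundant (e.g. real seen) samples are penalized relative to rare (e.g. pseudo unseen) ones. The prior values, temperature `tau`, and class layout below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def logit_adjusted_cross_entropy(logits, labels, class_priors, tau=1.0):
    """Softmax cross-entropy with logit adjustment: tau * log(prior) is
    added to each class logit before the softmax, shifting the decision
    boundary toward low-prior (e.g. unseen) classes."""
    adjusted = logits + tau * np.log(class_priors)      # (N, C) + (C,)
    # Numerically stable log-softmax.
    shifted = adjusted - adjusted.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Hypothetical GZSL setup: 3 seen classes with high empirical priors and
# 2 unseen classes represented only by generated pseudo samples.
priors = np.array([0.3, 0.3, 0.3, 0.05, 0.05])
logits = np.array([[2.0, 0.5, 0.1, 0.2, 0.1]])
labels = np.array([0])
loss = logit_adjusted_cross_entropy(logits, labels, priors)
```

With a uniform prior the adjustment adds the same constant to every logit and the loss reduces to plain cross-entropy; the effect comes entirely from the seen-unseen prior imbalance.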
Cite
Text
Chen et al. "Zero-Shot Logit Adjustment." International Joint Conference on Artificial Intelligence, 2022. doi:10.24963/IJCAI.2022/114
Markdown
[Chen et al. "Zero-Shot Logit Adjustment." International Joint Conference on Artificial Intelligence, 2022.](https://mlanthology.org/ijcai/2022/chen2022ijcai-zero/) doi:10.24963/IJCAI.2022/114
BibTeX
@inproceedings{chen2022ijcai-zero,
title = {{Zero-Shot Logit Adjustment}},
author = {Chen, Dubing and Shen, Yuming and Zhang, Haofeng and Torr, Philip H. S.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2022},
pages = {813-819},
doi = {10.24963/IJCAI.2022/114},
url = {https://mlanthology.org/ijcai/2022/chen2022ijcai-zero/}
}