Zero Shot Learning via Low-Rank Embedded Semantic AutoEncoder
Abstract
Zero-shot learning (ZSL) has been widely studied and has achieved success in machine learning. Most existing ZSL methods aim to accurately recognize objects of unseen classes by learning a shared mapping from the feature space to a semantic space. However, such methods do not investigate in depth whether the mapping can precisely reconstruct the original visual features. Motivated by the fact that data often have low intrinsic dimensionality, e.g. lie in a low-dimensional subspace, in this paper we formulate a novel framework named Low-rank Embedded Semantic AutoEncoder (LESAE) to jointly seek a low-rank mapping that links visual features with their semantic representations. Following the encoder-decoder paradigm, the encoder learns a low-rank mapping from the visual features to the semantic space, while the decoder reconstructs the original data with the learned mapping. In addition, a non-greedy iterative algorithm is adopted to solve our model. Extensive experiments on six benchmark datasets demonstrate its superiority over several state-of-the-art algorithms.
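The encoder-decoder idea summarized in the abstract can be illustrated with a minimal numerical sketch. The snippet below implements a plain semantic autoencoder, minimizing ||X − WᵀS||² + λ||WX − S||² in closed form via its Sylvester stationarity equation. This is only the basic paradigm the paper builds on: the low-rank constraint on the mapping and the non-greedy iterative solver from LESAE are omitted, and all variable names, dimensions, and data below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fit_sae_encoder(X, S, lam=0.2):
    """Fit the encoder W of a plain semantic autoencoder (no low-rank term).

    Minimizes ||X - W.T @ S||_F^2 + lam * ||W @ X - S||_F^2 over W,
    whose stationarity condition is the Sylvester equation
        (S S^T) W + W (lam X X^T) = (1 + lam) S X^T.
    X: d x N visual features, S: k x N semantic (attribute) vectors.
    """
    d, k = X.shape[0], S.shape[0]
    A = S @ S.T                   # k x k
    B = lam * (X @ X.T)           # d x d
    C = (1.0 + lam) * (S @ X.T)   # k x d
    # Solve A W + W B = C via the vectorized (Kronecker) linear system:
    # (I_d ⊗ A + B^T ⊗ I_k) vec(W) = vec(C), with column-major vec.
    M = np.kron(np.eye(d), A) + np.kron(B.T, np.eye(k))
    w = np.linalg.solve(M, C.flatten(order="F"))
    return w.reshape(k, d, order="F")

# Toy example with random data (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 100))   # 20-dim visual features, 100 samples
S = rng.standard_normal((5, 100))    # 5-dim semantic vectors
W = fit_sae_encoder(X, S)
encoded = W @ X          # encoder: project features into the semantic space
decoded = W.T @ encoded  # decoder: reconstruct the visual features
```

Because one projection matrix serves as both encoder and (transposed) decoder, the reconstruction term ties the learned semantic mapping back to the original visual features, which is the property the abstract argues earlier ZSL mappings neglected.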
Cite
Text
Liu et al. "Zero Shot Learning via Low-Rank Embedded Semantic AutoEncoder." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/345
Markdown
[Liu et al. "Zero Shot Learning via Low-Rank Embedded Semantic AutoEncoder." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/liu2018ijcai-zero/) doi:10.24963/IJCAI.2018/345
BibTeX
@inproceedings{liu2018ijcai-zero,
title = {{Zero Shot Learning via Low-Rank Embedded Semantic AutoEncoder}},
author = {Liu, Yang and Gao, Quanxue and Li, Jin and Han, Jungong and Shao, Ling},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2018},
pages = {2490-2496},
doi = {10.24963/IJCAI.2018/345},
url = {https://mlanthology.org/ijcai/2018/liu2018ijcai-zero/}
}