From Zero-Shot Learning to Conventional Supervised Classification: Unseen Visual Data Synthesis

Abstract

Robust object recognition systems usually rely on powerful feature extraction mechanisms trained on a large number of real images. However, in many realistic applications, collecting sufficient images for ever-growing new classes is unattainable. In this paper, we propose a new zero-shot learning (ZSL) framework that can synthesise visual features for unseen classes without acquiring real images. Using the proposed Unseen Visual Data Synthesis (UVDS) algorithm, semantic attributes are effectively utilised as an intermediate clue to synthesise unseen visual features at the training stage. Thereafter, ZSL recognition is converted into a conventional supervised problem, i.e. the synthesised visual features can be straightforwardly fed to typical classifiers such as SVMs. On four benchmark datasets, we demonstrate the benefit of using the synthesised unseen data. Extensive experimental results show that our proposed approach significantly improves on state-of-the-art results.
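
The pipeline the abstract describes can be illustrated with a minimal sketch: learn an attribute-to-feature mapping on seen classes, synthesise features for unseen classes from their attribute prototypes, then train an ordinary supervised classifier. This is not the paper's UVDS objective; plain ridge regression stands in for it, and all data, dimensions, and class counts below are placeholders.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.svm import LinearSVC

# Placeholder seen-class data: visual features X_seen (n x d) and
# per-image attribute vectors A_seen (n x k), copied from each
# image's class-level attribute prototype.
rng = np.random.default_rng(0)
X_seen = rng.normal(size=(200, 64))
A_seen = rng.normal(size=(200, 16))

# 1. Learn an attribute -> visual-feature mapping on seen data
#    (ridge regression as a stand-in for the UVDS algorithm).
mapper = Ridge(alpha=1.0).fit(A_seen, X_seen)

# 2. Synthesise visual features for unseen classes from their
#    class-level attribute prototypes (here, 3 unseen classes,
#    50 synthetic samples per class).
A_unseen = rng.normal(size=(3, 16))
X_synth = mapper.predict(np.repeat(A_unseen, 50, axis=0))
y_synth = np.repeat(np.arange(3), 50)

# 3. Conventional supervised classification: train an SVM on the
#    synthesised features, then classify real unseen-class test
#    images with clf.predict(...) as usual.
clf = LinearSVC().fit(X_synth, y_synth)
```

Because step 3 sees only synthesised features with class labels, any off-the-shelf classifier can replace the SVM, which is the sense in which ZSL is "converted into the conventional supervised problem".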

Cite

Text

Long et al. "From Zero-Shot Learning to Conventional Supervised Classification: Unseen Visual Data Synthesis." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.653

Markdown

[Long et al. "From Zero-Shot Learning to Conventional Supervised Classification: Unseen Visual Data Synthesis." Conference on Computer Vision and Pattern Recognition, 2017.](https://mlanthology.org/cvpr/2017/long2017cvpr-zeroshot/) doi:10.1109/CVPR.2017.653

BibTeX

@inproceedings{long2017cvpr-zeroshot,
  title     = {{From Zero-Shot Learning to Conventional Supervised Classification: Unseen Visual Data Synthesis}},
  author    = {Long, Yang and Liu, Li and Shao, Ling and Shen, Fumin and Ding, Guiguang and Han, Jungong},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2017},
  doi       = {10.1109/CVPR.2017.653},
  url       = {https://mlanthology.org/cvpr/2017/long2017cvpr-zeroshot/}
}