Semi-Supervised Learning with Very Few Labeled Training Examples

Abstract

In semi-supervised learning, a number of labeled examples are usually required for training an initial weakly useful predictor, which is in turn used for exploiting the unlabeled examples. However, in many real-world applications there may exist very few labeled training examples, which makes the weakly useful predictor difficult to generate, and therefore these semi-supervised learning methods cannot be applied. This paper proposes a method working under a two-view setting. By taking advantage of the correlations between the views using canonical component analysis, the proposed method can perform semi-supervised learning with only one labeled training example. Experiments and an application to content-based image retrieval validate the effectiveness of the proposed method.
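The building block the abstract mentions is commonly known as canonical correlation analysis (CCA): given two views of the same instances, find projections of each view that are maximally correlated, so that even a single labeled example can be compared to unlabeled ones in the shared subspace. The sketch below is a minimal NumPy illustration of that idea, not the authors' exact algorithm; the synthetic two-view data, the regularization constant, and the nearest-neighbour pseudo-labeling step at the end are all assumptions for the demo.

```python
import numpy as np

def cca(X, Y, k=2, reg=1e-6):
    """Canonical correlation analysis: find projections Wx, Wy that
    maximize the correlation between X @ Wx and Y @ Wy.
    `reg` is a small ridge term (an assumption) to keep the
    covariance matrices invertible."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])  # view-1 covariance (regularized)
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])  # view-2 covariance (regularized)
    Cxy = X.T @ Y / n                             # cross-covariance between views

    def inv_sqrt(C):
        # inverse matrix square root via eigendecomposition (C is symmetric PD)
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Kx, Ky = inv_sqrt(Cxx), inv_sqrt(Cyy)
    # SVD of the whitened cross-covariance gives the canonical directions;
    # the singular values are the canonical correlations, sorted descending.
    U, s, Vt = np.linalg.svd(Kx @ Cxy @ Ky)
    return Kx @ U[:, :k], Ky @ Vt.T[:, :k], s[:k]

# Two synthetic views driven by a shared 2-d latent signal (demo data, assumed).
rng = np.random.default_rng(0)
z = rng.standard_normal((300, 2))
X = z @ rng.standard_normal((2, 6)) + 0.1 * rng.standard_normal((300, 6))
Y = z @ rng.standard_normal((2, 5)) + 0.1 * rng.standard_normal((300, 5))
Wx, Wy, corrs = cca(X, Y, k=2)

# With one labeled example (say index 0), rank the unlabeled instances by
# distance in the correlated subspace and treat the nearest ones as
# confident pseudo-labels -- a simplified stand-in for the paper's procedure.
P = np.hstack([X @ Wx, Y @ Wy])
d = np.linalg.norm(P[1:] - P[0], axis=1)
neighbours = 1 + np.argsort(d)[:10]  # ten most similar unlabeled instances
```

Because the two views share a strong latent signal, the leading canonical correlation comes out close to 1 here, which is exactly the regime in which the correlated subspace is informative enough to propagate a single label.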

Cite

Text

Zhou et al. "Semi-Supervised Learning with Very Few Labeled Training Examples." AAAI Conference on Artificial Intelligence, 2007.

Markdown

[Zhou et al. "Semi-Supervised Learning with Very Few Labeled Training Examples." AAAI Conference on Artificial Intelligence, 2007.](https://mlanthology.org/aaai/2007/zhou2007aaai-semi/)

BibTeX

@inproceedings{zhou2007aaai-semi,
  title     = {{Semi-Supervised Learning with Very Few Labeled Training Examples}},
  author    = {Zhou, Zhi-Hua and Zhan, De-Chuan and Yang, Qiang},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2007},
  pages     = {675-680},
  url       = {https://mlanthology.org/aaai/2007/zhou2007aaai-semi/}
}