Semi-Supervised Classifications via Elastic and Robust Embedding

Abstract

Transductive semi-supervised learning can only predict labels for unlabeled data that appear in the training set; it cannot predict labels for test data never seen during training. To handle this out-of-sample problem, many inductive methods impose the constraint that the predicted label matrix be exactly equal to the output of a linear model. In practice, this constraint can be too rigid to capture the manifold structure of the data. In this paper, we relax this rigid constraint and propose an elastic constraint on the predicted label matrix so that the manifold structure can be better explored. Moreover, since unlabeled data are often abundant in practice and usually contain outliers, we use a non-squared loss instead of the traditional squared loss to learn a robust model. The derived problem, although convex, contains many nonsmooth terms, which makes it challenging to solve. We propose an efficient optimization algorithm for a more general problem, and based on it we find the optimal solution to the derived problem.
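The elastic relaxation described in the abstract can be written schematically as follows. This is a hedged sketch, not the paper's exact formulation: the symbols $X$ (data matrix), $F$ (predicted label matrix), $L$ (graph Laplacian), $W$ and $b$ (linear-model parameters), and the trade-off weights $\lambda, \mu$ are assumptions here, following the common flexible-embedding family of semi-supervised objectives.

```latex
% Rigid inductive constraint: the label matrix must equal a linear model,
%   F = X^{\top} W + \mathbf{1} b^{\top}.
% Elastic relaxation (schematic): replace the equality with a penalty term,
% and use a non-squared (\ell_{2,1}) loss for robustness to outliers:
\min_{F,\,W,\,b}\;
  \operatorname{tr}\!\left(F^{\top} L F\right)
  + \lambda\,\lVert W \rVert_F^{2}
  + \mu\,\bigl\lVert F - X^{\top} W - \mathbf{1} b^{\top} \bigr\rVert_{2,1}
```

As $\mu \to \infty$ the penalty enforces the rigid equality exactly, so the elastic model contains the inductive linear model as a limiting case while letting $F$ deviate from the linear map on regions of the manifold it cannot fit.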

Cite

Text

Liu et al. "Semi-Supervised Classifications via Elastic and Robust Embedding." AAAI Conference on Artificial Intelligence, 2017. doi:10.1609/AAAI.V31I1.10946

Markdown

[Liu et al. "Semi-Supervised Classifications via Elastic and Robust Embedding." AAAI Conference on Artificial Intelligence, 2017.](https://mlanthology.org/aaai/2017/liu2017aaai-semi/) doi:10.1609/AAAI.V31I1.10946

BibTeX

@inproceedings{liu2017aaai-semi,
  title     = {{Semi-Supervised Classifications via Elastic and Robust Embedding}},
  author    = {Liu, Yun and Guo, Yiming and Wang, Hua and Nie, Feiping and Huang, Heng},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2017},
  pages     = {2294--2300},
  doi       = {10.1609/AAAI.V31I1.10946},
  url       = {https://mlanthology.org/aaai/2017/liu2017aaai-semi/}
}