Semi-Supervised Learning for Structured Output Variables

Abstract

The problem of learning a mapping between input and structured, interdependent output variables covers sequential, spatial, and relational learning as well as predicting recursive structures. Joint feature representations of the input and output variables have paved the way to applying discriminative learners such as SVMs to this class of problems. We address the problem of semi-supervised learning in joint input-output spaces. The co-training approach is based on the principle of maximizing the consensus among multiple independent hypotheses; we develop this principle into a semi-supervised support vector learning algorithm for joint input-output spaces and arbitrary loss functions. Experiments investigate the benefit of semi-supervised structured models in terms of accuracy and F1 score.
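The consensus-maximization idea behind co-training can be illustrated with a minimal sketch: two hypotheses are trained on different feature views of a small labeled set, and unlabeled examples on which both hypotheses confidently agree are moved into the training pool. This is only a toy binary-classification illustration, assuming simple least-squares linear classifiers and synthetic two-view data; it is not the paper's structured SVM or its joint feature map, and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: each instance has two "views" (feature halves); labels are +/-1.
n_lab, n_unlab, d = 20, 200, 5
w_true = rng.normal(size=2 * d)
X = rng.normal(size=(n_lab + n_unlab, 2 * d))
y = np.sign(X @ w_true)

X_lab, y_lab = X[:n_lab], y[:n_lab]
X_unlab = X[n_lab:]


def train_view(Xv, yv):
    """Least-squares linear classifier on one view (stand-in for an SVM)."""
    w, *_ = np.linalg.lstsq(Xv, yv, rcond=None)
    return w


def cotrain(X_lab, y_lab, X_unlab, rounds=5, k=10):
    """Co-training loop: grow the labeled pool where both views agree."""
    labeled = [(x, t) for x, t in zip(X_lab, y_lab)]
    pool = list(X_unlab)
    for _ in range(rounds):
        Xl = np.array([x for x, _ in labeled])
        yl = np.array([t for _, t in labeled])
        w1 = train_view(Xl[:, :d], yl)   # hypothesis on view 1
        w2 = train_view(Xl[:, d:], yl)   # hypothesis on view 2
        if not pool:
            break
        Xp = np.array(pool)
        s1, s2 = Xp[:, :d] @ w1, Xp[:, d:] @ w2
        # Consensus principle: pick unlabeled points where both hypotheses
        # agree, most confident first, and adopt the agreed label.
        agree = np.where(np.sign(s1) == np.sign(s2))[0]
        chosen = agree[np.argsort(-np.abs(s1 + s2)[agree])[:k]]
        for i in sorted(chosen, reverse=True):
            labeled.append((pool[i], np.sign(s1[i] + s2[i])))
            del pool[i]
    return w1, w2


w1, w2 = cotrain(X_lab, y_lab, X_unlab)
```

The paper generalizes this agreement criterion from binary labels to structured outputs with arbitrary loss functions, where "agreement" is measured in the joint input-output feature space rather than by comparing scalar predictions.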

Cite

Text

Brefeld and Scheffer. "Semi-Supervised Learning for Structured Output Variables." International Conference on Machine Learning, 2006. doi:10.1145/1143844.1143863

Markdown

[Brefeld and Scheffer. "Semi-Supervised Learning for Structured Output Variables." International Conference on Machine Learning, 2006.](https://mlanthology.org/icml/2006/brefeld2006icml-semi/) doi:10.1145/1143844.1143863

BibTeX

@inproceedings{brefeld2006icml-semi,
  title     = {{Semi-Supervised Learning for Structured Output Variables}},
  author    = {Brefeld, Ulf and Scheffer, Tobias},
  booktitle = {International Conference on Machine Learning},
  year      = {2006},
  pages     = {145--152},
  doi       = {10.1145/1143844.1143863},
  url       = {https://mlanthology.org/icml/2006/brefeld2006icml-semi/}
}