Entropy and Margin Maximization for Structured Output Learning

Abstract

We consider the problem of training discriminative structured output predictors, such as conditional random fields (CRFs) and structured support vector machines (SSVMs). A generalized loss function is introduced that jointly maximizes the entropy and the margin of the solution. The CRF and SSVM emerge as special cases of our framework. The probabilistic interpretation of large-margin methods reveals insights about margin and slack rescaling. Furthermore, we derive the corresponding extensions for latent variable models, in which training operates on partially observed outputs. Experimental results for multiclass classification, linear-chain models, and multiple instance learning demonstrate that the generalized loss can improve the accuracy of the resulting classifiers.
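As a rough illustration of the idea described in the abstract, the sketch below shows a multiclass loss with an inverse-temperature parameter (here called `beta`, a name assumed for illustration): a margin-augmented log-sum-exp that reduces to a CRF-style log loss at `beta = 1` and approaches the SSVM hinge loss as `beta` grows. This is a minimal reading of the "entropy plus margin" interpolation, not the paper's actual implementation.

```python
import numpy as np

def generalized_loss(scores, y_true, beta=1.0, margin=None):
    """Margin-augmented softmax loss with inverse temperature beta.

    beta = 1 resembles a margin-augmented CRF (log) loss;
    beta -> infinity approaches the SSVM (structured hinge) loss.
    `margin` is a per-class task-loss vector (0 at the true class);
    the default is the zero-one loss.
    """
    scores = np.asarray(scores, dtype=float)
    if margin is None:
        margin = np.ones_like(scores)
        margin[y_true] = 0.0  # zero-one task loss, assumed for illustration
    z = beta * (scores + margin)
    # log-sum-exp with max-shift for numerical stability
    m = z.max()
    log_partition = m + np.log(np.exp(z - m).sum())
    return log_partition / beta - scores[y_true]
```

For large `beta`, the log-partition term is dominated by the highest-scoring (margin-augmented) class, recovering the hinge loss; small `beta` spreads probability mass over all classes, which is the entropy-maximizing end of the trade-off.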

Cite

Text

Pletscher et al. "Entropy and Margin Maximization for Structured Output Learning." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2010. doi:10.1007/978-3-642-15939-8_6

Markdown

[Pletscher et al. "Entropy and Margin Maximization for Structured Output Learning." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2010.](https://mlanthology.org/ecmlpkdd/2010/pletscher2010ecmlpkdd-entropy/) doi:10.1007/978-3-642-15939-8_6

BibTeX

@inproceedings{pletscher2010ecmlpkdd-entropy,
  title     = {{Entropy and Margin Maximization for Structured Output Learning}},
  author    = {Pletscher, Patrick and Ong, Cheng Soon and Buhmann, Joachim M.},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2010},
  pages     = {83--98},
  doi       = {10.1007/978-3-642-15939-8_6},
  url       = {https://mlanthology.org/ecmlpkdd/2010/pletscher2010ecmlpkdd-entropy/}
}