Consistent and Tractable Algorithm for Markov Network Learning

Abstract

Markov network (MN) structured output classifiers provide a transparent and powerful way to model dependencies between output labels. MN classifiers can be learned with the M3N algorithm; however, M3N is not statistically consistent and requires expensive fully annotated examples. We propose an algorithm for learning MN classifiers that is based on Fisher-consistent adversarial loss minimization. Learning is transformed into a tractable convex optimization that is amenable to standard gradient methods. We also extend the algorithm to learn from examples with missing labels, and we show that the extended algorithm remains convex, tractable, and statistically consistent.

Cite

Text

Franc et al. "Consistent and Tractable Algorithm for Markov Network Learning." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2022. doi:10.1007/978-3-031-26412-2_27

Markdown

[Franc et al. "Consistent and Tractable Algorithm for Markov Network Learning." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2022.](https://mlanthology.org/ecmlpkdd/2022/franc2022ecmlpkdd-consistent/) doi:10.1007/978-3-031-26412-2_27

BibTeX

@inproceedings{franc2022ecmlpkdd-consistent,
  title     = {{Consistent and Tractable Algorithm for Markov Network Learning}},
  author    = {Franc, Vojtech and Prusa, Daniel and Yermakov, Andrii},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2022},
  pages     = {435--451},
  doi       = {10.1007/978-3-031-26412-2_27},
  url       = {https://mlanthology.org/ecmlpkdd/2022/franc2022ecmlpkdd-consistent/}
}