Laplace Maximum Margin Markov Networks
Abstract
Learning sparse Markov networks based on the maximum margin principle remains an open problem in structured prediction. In this paper, we propose the Laplace max-margin Markov network (LapM3N), and a general class of Bayesian M3Ns (BM3N) of which the LapM3N is a special case that enjoys a sparse representation. The BM3N is built on a novel Structured Maximum Entropy Discrimination (SMED) formalism, which offers a general framework for combining Bayesian learning and max-margin learning of log-linear models for structured prediction, and subsumes the unsparsified M3N as a special case. We present an efficient iterative learning algorithm based on variational approximation and on existing convex optimization methods employed in M3N. We show that our method outperforms competing approaches on both synthetic and real OCR data.
Cite
Text
Zhu et al. "Laplace Maximum Margin Markov Networks." International Conference on Machine Learning, 2008. doi:10.1145/1390156.1390314
Markdown
[Zhu et al. "Laplace Maximum Margin Markov Networks." International Conference on Machine Learning, 2008.](https://mlanthology.org/icml/2008/zhu2008icml-laplace/) doi:10.1145/1390156.1390314
BibTeX
@inproceedings{zhu2008icml-laplace,
title = {{Laplace Maximum Margin Markov Networks}},
author = {Zhu, Jun and Xing, Eric P. and Zhang, Bo},
booktitle = {International Conference on Machine Learning},
year = {2008},
pages = {1256--1263},
doi = {10.1145/1390156.1390314},
url = {https://mlanthology.org/icml/2008/zhu2008icml-laplace/}
}