The Most Generative Maximum Margin Bayesian Networks

Abstract

Although discriminative learning in graphical models generally improves classification results, the generative semantics of the model are compromised. In this paper, we introduce a novel approach for hybrid generative-discriminative learning in Bayesian networks. We use an SVM-type large margin formulation for discriminative training, replacing the standard SVM norm penalty with a likelihood-weighted \ell^1-norm. This penalty simultaneously optimizes the data likelihood and therefore partly maintains the generative character of the model. For many network structures, our method can be formulated as a convex problem, guaranteeing a globally optimal solution. In terms of classification, the resulting models outperform state-of-the-art generative and discriminative learning methods for Bayesian networks, and are comparable with linear and kernelized SVMs. Furthermore, the models achieve likelihoods close to the maximum likelihood solution and show robust behavior in classification experiments with missing features.
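The hybrid objective described above can be illustrated with a minimal sketch: a hinge-type margin loss on the class log-joints of a binary naive Bayes model, plus an \ell^1 penalty on the log-parameters weighted by empirical frequencies. This is a conceptual illustration, not the paper's exact formulation; the function names (`nb_log_joint`, `hybrid_objective`) and the particular frequency weighting are assumptions chosen for readability.

```python
import numpy as np

def nb_log_joint(log_prior, log_cond, x):
    # log p(c, x) for every class c under a binary naive Bayes model,
    # where log_cond[c, j, v] = log p(x_j = v | c)
    return log_prior + log_cond[:, np.arange(x.size), x].sum(axis=1)

def hybrid_objective(log_prior, log_cond, X, y, gamma=1.0, lam=0.1):
    """Margin hinge loss + likelihood-weighted l1 penalty (conceptual sketch)."""
    n, n_feat = X.shape
    hinge = 0.0
    for i in range(n):
        lj = nb_log_joint(log_prior, log_cond, X[i])
        # margin: log-joint of the true class vs. the best competing class
        margin = lj[y[i]] - np.delete(lj, y[i]).max()
        hinge += max(0.0, gamma - margin)
    # likelihood-weighted l1 penalty: each |log-parameter| is weighted by the
    # empirical frequency of its sufficient statistic (one plausible reading
    # of the likelihood weighting; details differ in the paper)
    weights = np.zeros_like(log_cond)
    for i in range(n):
        for j in range(n_feat):
            weights[y[i], j, X[i, j]] += 1.0 / n
    penalty = (weights * np.abs(log_cond)).sum()
    return hinge / n + lam * penalty
```

Because both the hinge term and the penalty are nonnegative, the objective is bounded below by zero; for suitable parameterizations (as noted in the abstract) the full formulation is convex, so a global optimum can be found.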

Cite

Text

Peharz et al. "The Most Generative Maximum Margin Bayesian Networks." International Conference on Machine Learning, 2013.

Markdown

[Peharz et al. "The Most Generative Maximum Margin Bayesian Networks." International Conference on Machine Learning, 2013.](https://mlanthology.org/icml/2013/peharz2013icml-most/)

BibTeX

@inproceedings{peharz2013icml-most,
  title     = {{The Most Generative Maximum Margin Bayesian Networks}},
  author    = {Peharz, Robert and Tschiatschek, Sebastian and Pernkopf, Franz},
  booktitle = {International Conference on Machine Learning},
  year      = {2013},
  pages     = {235--243},
  volume    = {28},
  url       = {https://mlanthology.org/icml/2013/peharz2013icml-most/}
}