Hidden Common Cause Relations in Relational Learning

Abstract

When predicting class labels for objects within a relational database, it is often helpful to consider a model for relationships: this allows information to be shared between class labels, improving prediction performance. However, there are different ways by which objects can be related within a relational database. One traditional way corresponds to a Markov network structure: each existing relation is represented by an undirected edge. This encodes that, conditioned on input features, each object label is independent of other object labels given its neighbors in the graph. However, there is no reason why Markov networks should be the only representation of choice for symmetric dependence structures. Here we discuss the case when relationships are postulated to exist due to hidden common causes. We discuss how the resulting graphical model differs from Markov networks, and how it describes different types of real-world relational processes. A Bayesian nonparametric classification model is built upon this graphical representation and evaluated with several empirical studies.

Cite

Text

Silva et al. "Hidden Common Cause Relations in Relational Learning." Neural Information Processing Systems, 2007.

Markdown

[Silva et al. "Hidden Common Cause Relations in Relational Learning." Neural Information Processing Systems, 2007.](https://mlanthology.org/neurips/2007/silva2007neurips-hidden/)

BibTeX

@inproceedings{silva2007neurips-hidden,
  title     = {{Hidden Common Cause Relations in Relational Learning}},
  author    = {Silva, Ricardo and Chu, Wei and Ghahramani, Zoubin},
  booktitle = {Neural Information Processing Systems},
  year      = {2007},
  pages     = {1345--1352},
  url       = {https://mlanthology.org/neurips/2007/silva2007neurips-hidden/}
}