Gradient-Based Boosting for Statistical Relational Learning: The Relational Dependency Network Case
Abstract
Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. Whereas current learning approaches for RDNs learn a single probability tree per random variable, we propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and in turn quickly estimate a very expressive model. Our experimental results on several data sets show that this boosting method results in efficient learning of RDNs when compared to state-of-the-art statistical relational learning approaches.
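The core idea of the abstract, representing a conditional distribution as a sum of regression models fit stage-wise to functional gradients, can be illustrated in a propositional setting. The sketch below is an illustrative assumption, not the authors' implementation: it boosts one-split regression stumps on a single numeric feature, where the paper fits relational regression trees to the same pointwise gradients y − P(y=1 | x).

```python
import math

def predict_prob(stumps, x, lr=0.5):
    """Sigmoid of the additive model psi(x) = sum_m lr * h_m(x)."""
    psi = sum(lr * (lv if x < t else rv) for t, lv, rv in stumps)
    return 1.0 / (1.0 + math.exp(-psi))

def fit_boosted(xs, ys, n_iters=20, lr=0.5):
    """Stage-wise functional-gradient boosting for a binary target.

    Each iteration fits a regression stump h(x) = lv if x < t else rv
    to the pointwise gradients of the log-likelihood, y_i - p_i.
    (Illustrative stand-in for the relational trees used in the paper.)
    """
    stumps = []
    for _ in range(n_iters):
        # Functional gradients under the current model: residuals y - p.
        probs = [predict_prob(stumps, x, lr) for x in xs]
        grads = [y - p for y, p in zip(ys, probs)]
        # Least-squares fit of a one-split stump to the gradients.
        best = None
        for t in sorted(set(xs)):
            left = [g for x, g in zip(xs, grads) if x < t]
            right = [g for x, g in zip(xs, grads) if x >= t]
            lv = sum(left) / len(left) if left else 0.0
            rv = sum(right) / len(right) if right else 0.0
            err = sum((g - (lv if x < t else rv)) ** 2
                      for x, g in zip(xs, grads))
            if best is None or err < best[0]:
                best = (err, t, lv, rv)
        stumps.append(best[1:])
    return stumps
```

On a toy sample such as `fit_boosted([0, 1, 2, 3, 4, 5], [0, 0, 0, 1, 1, 1])`, the boosted sum pushes the predicted probability toward 0 below the split and toward 1 above it, which is the "highly complex features over several iterations" effect in its simplest form.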
Cite
Text
Natarajan et al. "Gradient-Based Boosting for Statistical Relational Learning: The Relational Dependency Network Case." Machine Learning, 2012. doi:10.1007/s10994-011-5244-9

Markdown

[Natarajan et al. "Gradient-Based Boosting for Statistical Relational Learning: The Relational Dependency Network Case." Machine Learning, 2012.](https://mlanthology.org/mlj/2012/natarajan2012mlj-gradientbased/) doi:10.1007/s10994-011-5244-9

BibTeX
@article{natarajan2012mlj-gradientbased,
title = {{Gradient-Based Boosting for Statistical Relational Learning: The Relational Dependency Network Case}},
author = {Natarajan, Sriraam and Khot, Tushar and Kersting, Kristian and Gutmann, Bernd and Shavlik, Jude W.},
journal = {Machine Learning},
year = {2012},
  pages = {25--56},
  doi = {10.1007/s10994-011-5244-9},
volume = {86},
url = {https://mlanthology.org/mlj/2012/natarajan2012mlj-gradientbased/}
}