Learning First-Order Probabilistic Models with Combining Rules
Abstract
First-order probabilistic models allow us to model situations in which a random variable in the first-order model may have a large and varying number of parent variables in the ground ("unrolled") model. One approach to compactly describing such models is to independently specify the probability of a random variable conditioned on each individual parent (or small set of parents) and then combine these conditional distributions via a combining rule (e.g., Noisy-OR). This paper presents algorithms for learning with combining rules. Specifically, algorithms based on gradient descent and expectation maximization are derived, implemented, and evaluated on synthetic data and on a real-world task. The results demonstrate that the algorithms are able to learn the parameters of both the individual parent-target distributions and the combining rules.
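As a concrete illustration of the kind of combining rule the abstract mentions, the classic Noisy-OR combines one conditional probability per parent into a single distribution over the target. This is a minimal sketch for binary variables, not code from the paper; the function name and interface are hypothetical:

```python
def noisy_or(parent_probs):
    """Combine per-parent probabilities P(target=1 | parent_i active)
    via Noisy-OR: the target stays off only if every parent's
    independent causal mechanism fails to fire.

    parent_probs: iterable of probabilities, one per active parent.
    Returns P(target = 1). (Hypothetical helper, for illustration.)
    """
    prob_all_fail = 1.0
    for p in parent_probs:
        prob_all_fail *= (1.0 - p)  # each cause fails independently
    return 1.0 - prob_all_fail

# Two parents, each sufficient with probability 0.5:
# P(target=1) = 1 - 0.5 * 0.5 = 0.75
print(noisy_or([0.5, 0.5]))
```

Because the rule factors over parents, the model's parameter count stays fixed even when the unrolled network gives a target many parents, which is what makes learning the per-parent distributions tractable.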
Cite
Text
Natarajan et al. "Learning First-Order Probabilistic Models with Combining Rules." International Conference on Machine Learning, 2005. doi:10.1145/1102351.1102428
Markdown
[Natarajan et al. "Learning First-Order Probabilistic Models with Combining Rules." International Conference on Machine Learning, 2005.](https://mlanthology.org/icml/2005/natarajan2005icml-learning/) doi:10.1145/1102351.1102428
BibTeX
@inproceedings{natarajan2005icml-learning,
title = {{Learning First-Order Probabilistic Models with Combining Rules}},
author = {Natarajan, Sriraam and Tadepalli, Prasad and Altendorf, Eric and Dietterich, Thomas G. and Fern, Alan and Restificar, Angelo C.},
booktitle = {International Conference on Machine Learning},
year = {2005},
pages = {609--616},
doi = {10.1145/1102351.1102428},
url = {https://mlanthology.org/icml/2005/natarajan2005icml-learning/}
}