Learning Multiple Relational Rule-Based Models

Abstract

We present a method for learning multiple relational models for each class in the data. Bayesian probability theory offers an optimal strategy for combining the classifications of the individual concept descriptions; here we use a tractable approximation to that theory. Previous work on learning multiple models has been confined to the attribute-value realm. We show that stochastically learning multiple relational (first-order) models, each consisting of a ruleset per class, also yields gains in accuracy over a single, deterministically learned relational model. In addition, we show that learning multiple models is most helpful when the hypothesis space is "flat" with respect to the gain metric used in learning.
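To make the combination step concrete, the sketch below shows one common way to approximate Bayesian evidence combination: each learned model casts a class vote weighted by an approximate posterior weight. This is an illustrative assumption, not the authors' implementation; the model representation, weights, and example data are hypothetical.

# Minimal sketch of posterior-weighted combination of multiple models' votes.
# Assumption: each model is a (predict_fn, weight) pair, where the weight
# stands in for an approximate posterior over that hypothesis.

from collections import defaultdict

def combine_predictions(models, example):
    """Return the class with the highest posterior-weighted vote total."""
    scores = defaultdict(float)
    for predict, weight in models:
        scores[predict(example)] += weight  # accumulate weighted votes per class
    return max(scores, key=scores.get)

# Hypothetical usage: three stochastically learned classifiers for a two-class task.
models = [
    (lambda x: "pos" if x["a"] > 0 else "neg", 0.5),
    (lambda x: "pos" if x["b"] > 1 else "neg", 0.3),
    (lambda x: "neg", 0.2),
]
print(combine_predictions(models, {"a": 2, "b": 0}))  # -> "pos" (0.5 vs. 0.5 tie broken by insertion order here; weights are illustrative)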

Cite

Text

Ali et al. "Learning Multiple Relational Rule-Based Models." Pre-proceedings of the Fifth International Workshop on Artificial Intelligence and Statistics, 1995.

Markdown

[Ali et al. "Learning Multiple Relational Rule-Based Models." Pre-proceedings of the Fifth International Workshop on Artificial Intelligence and Statistics, 1995.](https://mlanthology.org/aistats/1995/ali1995aistats-learning/)

BibTeX

@inproceedings{ali1995aistats-learning,
  title     = {{Learning Multiple Relational Rule-Based Models}},
  author    = {Ali, Kamal M. and Brunk, Clifford and Pazzani, Michael J.},
  booktitle = {Pre-proceedings of the Fifth International Workshop on Artificial Intelligence and Statistics},
  year      = {1995},
  pages     = {8--14},
  volume    = {R0},
  url       = {https://mlanthology.org/aistats/1995/ali1995aistats-learning/}
}