Bottom-up Induction of Oblivious Read-Once Decision Graphs: Strengths and Limitations
Abstract
We report improvements to HOODG, a supervised learning algorithm that induces concepts from labelled instances, using oblivious, read-once decision graphs as the hypothesis representation. We show that the greedy approach to variable ordering is locally optimal, but we also identify an inherent limitation of all algorithms, including HOODG, that construct such decision graphs bottom-up by minimizing the width of the levels in the resulting graph. Empirical experiments demonstrate the algorithm's generalization power.

Introduction

In supervised classification learning, one tries to find a structure, such as a decision tree, a neural net, or a Boolean formula, that can be used to accurately predict the label of novel instances. A given concept can be represented by different structures that differ in many aspects, including comprehensibility, storage size, and query time. Decision trees provide one structure that is commonly const...
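To make the bottom-up, width-minimizing construction concrete, the following is a minimal sketch, not the paper's HOODG algorithm (which induces from labelled instances and must cope with partially specified functions). It builds an oblivious read-once decision graph for a fully specified Boolean function under one fixed variable order, merging structurally identical nodes level by level; all names are illustrative.

```python
from itertools import product

def level_widths(f, n):
    """Build an oblivious read-once decision graph for a fully specified
    Boolean function f (dict: n-bit tuple -> label) bottom-up, under the
    fixed variable order x1..xn, and return the width of each level."""
    # Sinks: identify each complete assignment with its label.
    ids = {bits: ('leaf', f[bits]) for bits in product((0, 1), repeat=n)}
    widths = []
    # Work upward.  A node at the level testing variable x_{i+1} is
    # determined by its (0-child, 1-child) pair, so identical pairs merge,
    # which minimizes that level's width for this ordering.
    for level in range(n - 1, -1, -1):
        unique = {}   # (child0, child1) -> canonical node id
        new_ids = {}
        for prefix in product((0, 1), repeat=level):
            key = (ids[prefix + (0,)], ids[prefix + (1,)])
            if key not in unique:
                unique[key] = ('node', level, len(unique))
            new_ids[prefix] = unique[key]
        widths.append(len(unique))
        ids = new_ids
    widths.reverse()
    return widths

# 3-bit parity: every level collapses to at most two nodes.
parity = {bits: sum(bits) % 2 for bits in product((0, 1), repeat=3)}
print(level_widths(parity, 3))  # -> [1, 2, 2]
```

Because merging is purely local to each level, a graph built this way can be width-minimal per level for the chosen ordering yet still depend heavily on which ordering was chosen, which is the setting in which the paper's limitation result applies.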
Citation

Kohavi, Ron. "Bottom-up Induction of Oblivious Read-Once Decision Graphs: Strengths and Limitations." AAAI Conference on Artificial Intelligence, 1994, pp. 613-618. https://mlanthology.org/aaai/1994/kohavi1994aaai-bottom/

BibTeX:
@inproceedings{kohavi1994aaai-bottom,
title = {{Bottom-up Induction of Oblivious Read-Once Decision Graphs: Strengths and Limitations}},
author = {Kohavi, Ron},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {1994},
pages = {613-618},
url = {https://mlanthology.org/aaai/1994/kohavi1994aaai-bottom/}
}