An L1 Regularization Framework for Optimal Rule Combination

Abstract

This paper introduces ℓ_1 regularization into relational learning to produce sparse rule combinations, i.e., final rule sets that contain as few rules as possible. We further design a rule-complexity penalty that favors rules with fewer literals. The resulting optimization problem is formulated in an infinite-dimensional space of Horn clauses $R_m$, each associated with a complexity $\mathcal{C}_m$. We prove that if a locally optimal rule is generated at each iteration, the final rule set is globally optimal. The proposed meta-algorithm is applicable to any single-rule generator; we instantiate it in two algorithms, ℓ_1FOIL and ℓ_1Progol. Empirical analysis is carried out on ten real-world tasks from bioinformatics and cheminformatics. The results demonstrate that our approach offers competitive prediction accuracy while remaining straightforward to interpret.
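To make the combination step concrete, the sketch below shows one plausible reading of a complexity-weighted ℓ_1 penalty over a *fixed, finite* pool of candidate rules: each rule's weight is shrunk by a soft threshold proportional to its complexity, so rules with more literals are driven to zero sooner. This is only an illustrative proximal-gradient sketch under squared loss; the function name, the loss choice, and the fixed rule pool are assumptions, not the paper's infinite-dimensional formulation or its FOIL/Progol rule generators.

```python
import numpy as np

def l1_rule_combination(F, y, complexity, lam=0.05, lr=0.1, iters=500):
    """Illustrative sketch (not the paper's algorithm): learn a sparse
    weighted vote over a fixed pool of m candidate rules.

    F          : (n, m) 0/1 matrix, F[i, j] = 1 if rule j fires on example i
    y          : (n,) targets
    complexity : (m,) per-rule complexity C_m (e.g. number of literals)
    Objective  : (1/2n) * ||F w - y||^2 + lam * sum_m C_m * |w_m|
    """
    n, m = F.shape
    w = np.zeros(m)
    for _ in range(iters):
        # gradient step on the smooth squared-loss term
        grad = F.T @ (F @ w - y) / n
        w = w - lr * grad
        # proximal step: soft-thresholding with a per-rule threshold,
        # so more complex rules are penalized harder
        thresh = lr * lam * complexity
        w = np.sign(w) * np.maximum(np.abs(w) - thresh, 0.0)
    return w
```

With a suitable λ, weights on uninformative rules hit exactly zero, which is the sparsity property the abstract describes; the per-rule threshold `lam * C_m` is where the complexity penalty enters.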

Cite

Text

Han and Wang. "An L1 Regularization Framework for Optimal Rule Combination." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2009. doi:10.1007/978-3-642-04180-8_50

Markdown

[Han and Wang. "An L1 Regularization Framework for Optimal Rule Combination." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2009.](https://mlanthology.org/ecmlpkdd/2009/han2009ecmlpkdd-l1/) doi:10.1007/978-3-642-04180-8_50

BibTeX

@inproceedings{han2009ecmlpkdd-l1,
  title     = {{An L1 Regularization Framework for Optimal Rule Combination}},
  author    = {Han, Yanjun and Wang, Jue},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2009},
  pages     = {501--516},
  doi       = {10.1007/978-3-642-04180-8_50},
  url       = {https://mlanthology.org/ecmlpkdd/2009/han2009ecmlpkdd-l1/}
}