On Trustworthy Rule-Based Models and Explanations
Abstract
A task of interest in machine learning (ML) is that of ascribing explanations to the predictions made by ML models. Furthermore, in domains deemed high risk, the rigor of explanations is paramount. Indeed, incorrect explanations can and will mislead human decision makers. As a result, and even if interpretability is acknowledged as an elusive concept, so-called interpretable models are employed ubiquitously in high-risk uses of ML and data mining (DM). This is the case for rule-based ML models, which encompass decision trees, decision diagrams, decision sets, and decision lists. This paper relates explanations to well-known undesired facets of rule-based ML models, which include negative overlap and several forms of redundancy. The paper develops algorithms for the analysis of these undesired facets of rule-based systems, and concludes that well-known and widely used tools for learning rule-based ML models induce rule sets that exhibit one or more of these negative facets.
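To make the notion of negative overlap concrete, the sketch below checks a small rule set for pairs of rules whose conditions can fire on the same input while predicting different classes. This is an illustrative toy over Boolean features, not the paper's algorithm; the rule encoding, the helper names, and the sample rules are all assumptions made for the example.

```python
from itertools import combinations

# A rule is a pair (condition, predicted_class), where the condition is a
# dict mapping a Boolean feature name to its required value (0 or 1).
# Two rules exhibit *negative overlap* if their conditions are jointly
# satisfiable (no feature is forced to two different values) while the
# rules predict different classes: some input would then match both rules
# and receive conflicting predictions.

def literals_consistent(cond_a, cond_b):
    """Return True if the two Boolean conditions can hold simultaneously."""
    merged = dict(cond_a)
    for feat, val in cond_b.items():
        if feat in merged and merged[feat] != val:
            return False  # contradictory requirement on the same feature
        merged[feat] = val
    return True

def negative_overlaps(rules):
    """Return index pairs of rules that overlap yet predict different classes."""
    clashes = []
    for (i, (ca, pa)), (j, (cb, pb)) in combinations(enumerate(rules), 2):
        if pa != pb and literals_consistent(ca, cb):
            clashes.append((i, j))
    return clashes

# Hypothetical rule set for illustration:
rules = [
    ({"x1": 1, "x2": 0}, "pos"),  # IF x1 AND NOT x2 THEN pos
    ({"x1": 1}, "neg"),           # IF x1 THEN neg -- clashes with rule 0
    ({"x2": 1}, "neg"),           # disjoint from rule 0 (contradicts on x2)
]
print(negative_overlaps(rules))   # [(0, 1)]
```

For real-valued or interval conditions, `literals_consistent` would instead intersect the per-feature intervals; the pairwise structure of the check stays the same.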
Cite
Text
Siala et al. "On Trustworthy Rule-Based Models and Explanations." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2025. doi:10.1007/978-3-032-06078-5_10

Markdown

[Siala et al. "On Trustworthy Rule-Based Models and Explanations." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2025.](https://mlanthology.org/ecmlpkdd/2025/siala2025ecmlpkdd-trustworthy/) doi:10.1007/978-3-032-06078-5_10

BibTeX
@inproceedings{siala2025ecmlpkdd-trustworthy,
title = {{On Trustworthy Rule-Based Models and Explanations}},
author = {Siala, Mohamed and Planes, Jordi and Marques-Silva, João},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2025},
  pages = {166--184},
doi = {10.1007/978-3-032-06078-5_10},
url = {https://mlanthology.org/ecmlpkdd/2025/siala2025ecmlpkdd-trustworthy/}
}