A Bayesian Framework for Learning Rule Sets for Interpretable Classification
Abstract
We present a machine learning algorithm for building classifiers that are composed of a small number of short rules. These are restricted disjunctive normal form models. An example of a classifier of this form is as follows: if $X$ satisfies (condition $A$ AND condition $B$) OR (condition $C$) OR $\cdots$, then $Y=1$. Models of this form have the advantage of being interpretable to human experts, since they produce a set of rules that concisely describes a specific class. We present two probabilistic models with prior parameters that the user can set to encourage the model to have a desired size and shape, to conform with a domain-specific definition of interpretability. We provide a scalable MAP inference approach and develop theoretical bounds to reduce computation by iteratively pruning the search space. We apply our method, Bayesian Rule Sets (BRS), to characterize and predict user behavior with respect to in-vehicle context-aware personalized recommender systems. Our method has a major advantage over classical associative classification methods and decision trees in that it does not greedily grow the model.
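To make the model form concrete, below is a minimal sketch of how a learned rule set in this restricted DNF form classifies an example. The rule set, feature names, and dictionary-based encoding are illustrative assumptions, not the authors' implementation; BRS learns such rule sets via MAP inference under the priors described in the paper.

```python
# Sketch: a rule set is a disjunction (OR) of rules, each rule a conjunction (AND)
# of (feature, value) conditions. An example is positive if any rule fires.

def predict(rule_set, x):
    """Return 1 if any rule (a conjunction of conditions) is satisfied by x."""
    for rule in rule_set:                                    # rules are OR-ed together
        if all(x.get(feat) == val for feat, val in rule):    # conditions are AND-ed
            return 1
    return 0

# Hypothetical learned rule set:
# (weather = rainy AND time = morning) OR (destination = work)
rule_set = [
    [("weather", "rainy"), ("time", "morning")],
    [("destination", "work")],
]

x = {"weather": "rainy", "time": "morning", "destination": "home"}
print(predict(rule_set, x))  # -> 1, the first rule fires
```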
Cite
Text
Wang et al. "A Bayesian Framework for Learning Rule Sets for Interpretable Classification." Journal of Machine Learning Research, 2017.
Markdown
[Wang et al. "A Bayesian Framework for Learning Rule Sets for Interpretable Classification." Journal of Machine Learning Research, 2017.](https://mlanthology.org/jmlr/2017/wang2017jmlr-bayesian/)
BibTeX
@article{wang2017jmlr-bayesian,
  title = {{A Bayesian Framework for Learning Rule Sets for Interpretable Classification}},
  author = {Wang, Tong and Rudin, Cynthia and Doshi-Velez, Finale and Liu, Yimin and Klampfl, Erica and MacNeille, Perry},
  journal = {Journal of Machine Learning Research},
  year = {2017},
  volume = {18},
  pages = {1--37},
  url = {https://mlanthology.org/jmlr/2017/wang2017jmlr-bayesian/}
}